Test Report: Docker_Linux_crio_arm64 19373

afa0c1cf199b27e59d48f8572184259dc9d34cb2:2024-08-06:35664

Failed tests (5/335)

|-------|------------------------------------------------------------|--------------|
| Order | Failed test                                                | Duration (s) |
|-------|------------------------------------------------------------|--------------|
| 43    | TestAddons/parallel/Ingress                                | 152.97       |
| 45    | TestAddons/parallel/MetricsServer                          | 334.31       |
| 106   | TestFunctional/parallel/PersistentVolumeClaim              | 188.7        |
| 144   | TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup | 240.89       |
| 156   | TestFunctional/parallel/TunnelCmd/serial/AccessDirect      | 69.86        |
|-------|------------------------------------------------------------|--------------|
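Any of these can be re-run in isolation with Go's slash-separated subtest selector. A minimal sketch, assuming the upstream minikube repository layout where these tests (addons_test.go, helpers_test.go) live under test/integration; the real CI invocation also passes driver and runtime flags not shown here:

    # Re-run only the failing Ingress subtest, verbosely
    go test ./test/integration -v -run 'TestAddons/parallel/Ingress'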
TestAddons/parallel/Ingress (152.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-554168 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-554168 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-554168 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c286abb4-13f2-4fd3-b341-8334e3926d66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c286abb4-13f2-4fd3-b341-8334e3926d66] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00472114s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-554168 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.152963754s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
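curl exits with status 28 when it times out, which matches the 2m11s the ssh command hung for above: the probe stalled rather than being refused outright. A quicker way to re-probe, as a sketch: the profile name and URL are taken verbatim from the failing command, while --max-time and -v are ordinary curl flags added here for faster, more verbose diagnosis:

    # Bound the wait to 10s and log the connection attempt verbosely
    out/minikube-linux-arm64 -p addons-554168 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Check that the Ingress object and its backing endpoints exist
    kubectl --context addons-554168 get ingress,endpoints -n default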
addons_test.go:288: (dbg) Run:  kubectl --context addons-554168 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 addons disable ingress-dns --alsologtostderr -v=1: (1.139692708s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 addons disable ingress --alsologtostderr -v=1: (7.780758023s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-554168
helpers_test.go:235: (dbg) docker inspect addons-554168:

-- stdout --
	[
	    {
	        "Id": "00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f",
	        "Created": "2024-08-05T22:49:41.710004288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1566619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-05T22:49:41.847014635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/hostname",
	        "HostsPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/hosts",
	        "LogPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f-json.log",
	        "Name": "/addons-554168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-554168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-554168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd-init/diff:/var/lib/docker/overlay2/86ccb695426d1801c241efb9fd4274cb7838d591a3ef1deb45fd2daef819089e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-554168",
	                "Source": "/var/lib/docker/volumes/addons-554168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-554168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-554168",
	                "name.minikube.sigs.k8s.io": "addons-554168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2d7ac081a5f115bf9c853d50fa4efab21fbdba26e321a233c2b94a476826576",
	            "SandboxKey": "/var/run/docker/netns/d2d7ac081a5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34639"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34640"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-554168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a91da328e200f47df71e5a86f827f35495060e6334363392667e57e82f61a2c6",
	                    "EndpointID": "756b160513df82d17a803a0bc0c9b7f24bd1cba6d70ec9516860d76ef0d25dbc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-554168",
	                        "00fe2ccfdede"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
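The Ports map above also pins down the SSH tunnel the failing command runs through: 22/tcp inside the container is published on 127.0.0.1:34637. That host port can be recovered from the same inspect output with the Go template minikube itself runs later in this log:

    # Print the host port mapped to the container's SSH port
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-554168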
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-554168 -n addons-554168
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 logs -n 25: (1.425999497s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-926965                                                                     | download-only-926965   | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | download-docker-565200 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | download-docker-565200                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-565200                                                                   | download-docker-565200 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-045657   | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-045657                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36853                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-045657                                                                     | binary-mirror-045657   | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-554168 --wait=true                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-554168 ip                                                                            | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | -p addons-554168                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-554168 ssh cat                                                                       | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-61300692-a5b6-4c41-ab58-cbf29128fef9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-554168                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-554168 ssh curl -s                                                                   | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-554168 ip                                                                            | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:49:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:49:15.926842 1566127 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:49:15.926977 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:15.927140 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:49:15.927157 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:15.927392 1566127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 22:49:15.927912 1566127 out.go:298] Setting JSON to false
	I0805 22:49:15.928801 1566127 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27096,"bootTime":1722871060,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 22:49:15.928879 1566127 start.go:139] virtualization:  
	I0805 22:49:15.931641 1566127 out.go:177] * [addons-554168] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 22:49:15.934606 1566127 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 22:49:15.934655 1566127 notify.go:220] Checking for updates...
	I0805 22:49:15.937045 1566127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:49:15.939618 1566127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:49:15.941987 1566127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 22:49:15.944091 1566127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 22:49:15.946728 1566127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 22:49:15.949139 1566127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:49:15.970171 1566127 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 22:49:15.970293 1566127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:16.044322 1566127 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:16.033803322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:16.044451 1566127 docker.go:307] overlay module found
	I0805 22:49:16.046620 1566127 out.go:177] * Using the docker driver based on user configuration
	I0805 22:49:16.048448 1566127 start.go:297] selected driver: docker
	I0805 22:49:16.048472 1566127 start.go:901] validating driver "docker" against <nil>
	I0805 22:49:16.048488 1566127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 22:49:16.049229 1566127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:16.100271 1566127 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:16.090520508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:16.100432 1566127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:49:16.100757 1566127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:49:16.102435 1566127 out.go:177] * Using Docker driver with root privileges
	I0805 22:49:16.104207 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:49:16.104230 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:49:16.104243 1566127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 22:49:16.104342 1566127 start.go:340] cluster config:
	{Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:16.106532 1566127 out.go:177] * Starting "addons-554168" primary control-plane node in "addons-554168" cluster
	I0805 22:49:16.108064 1566127 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 22:49:16.109825 1566127 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0805 22:49:16.111787 1566127 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 22:49:16.111954 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:16.111988 1566127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0805 22:49:16.112000 1566127 cache.go:56] Caching tarball of preloaded images
	I0805 22:49:16.112066 1566127 preload.go:172] Found /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0805 22:49:16.112081 1566127 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 22:49:16.112416 1566127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json ...
	I0805 22:49:16.112442 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json: {Name:mkaaf90554ae570281dc409936a60acfcebfaea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:16.128238 1566127 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:49:16.128366 1566127 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 22:49:16.128390 1566127 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 22:49:16.128398 1566127 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 22:49:16.128406 1566127 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 22:49:16.128415 1566127 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0805 22:49:33.150301 1566127 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0805 22:49:33.150347 1566127 cache.go:194] Successfully downloaded all kic artifacts
	I0805 22:49:33.150380 1566127 start.go:360] acquireMachinesLock for addons-554168: {Name:mk99fd9ec2c5ec7bf0bc1e27cb3a59cdbefafe59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:33.151094 1566127 start.go:364] duration metric: took 686.655µs to acquireMachinesLock for "addons-554168"
	I0805 22:49:33.151134 1566127 start.go:93] Provisioning new machine with config: &{Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:49:33.151229 1566127 start.go:125] createHost starting for "" (driver="docker")
	I0805 22:49:33.153580 1566127 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0805 22:49:33.153832 1566127 start.go:159] libmachine.API.Create for "addons-554168" (driver="docker")
	I0805 22:49:33.153868 1566127 client.go:168] LocalClient.Create starting
	I0805 22:49:33.154005 1566127 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem
	I0805 22:49:34.102408 1566127 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem
	I0805 22:49:35.223392 1566127 cli_runner.go:164] Run: docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0805 22:49:35.239271 1566127 cli_runner.go:211] docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0805 22:49:35.239369 1566127 network_create.go:284] running [docker network inspect addons-554168] to gather additional debugging logs...
	I0805 22:49:35.239393 1566127 cli_runner.go:164] Run: docker network inspect addons-554168
	W0805 22:49:35.255244 1566127 cli_runner.go:211] docker network inspect addons-554168 returned with exit code 1
	I0805 22:49:35.255294 1566127 network_create.go:287] error running [docker network inspect addons-554168]: docker network inspect addons-554168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-554168 not found
	I0805 22:49:35.255324 1566127 network_create.go:289] output of [docker network inspect addons-554168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-554168 not found
	
	** /stderr **
	I0805 22:49:35.255432 1566127 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 22:49:35.270195 1566127 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000ecb0}
	I0805 22:49:35.270241 1566127 network_create.go:124] attempt to create docker network addons-554168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0805 22:49:35.270346 1566127 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-554168 addons-554168
	I0805 22:49:35.345733 1566127 network_create.go:108] docker network addons-554168 192.168.49.0/24 created
	I0805 22:49:35.345768 1566127 kic.go:121] calculated static IP "192.168.49.2" for the "addons-554168" container
	I0805 22:49:35.345843 1566127 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0805 22:49:35.366850 1566127 cli_runner.go:164] Run: docker volume create addons-554168 --label name.minikube.sigs.k8s.io=addons-554168 --label created_by.minikube.sigs.k8s.io=true
	I0805 22:49:35.383637 1566127 oci.go:103] Successfully created a docker volume addons-554168
	I0805 22:49:35.383730 1566127 cli_runner.go:164] Run: docker run --rm --name addons-554168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --entrypoint /usr/bin/test -v addons-554168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0805 22:49:37.365232 1566127 cli_runner.go:217] Completed: docker run --rm --name addons-554168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --entrypoint /usr/bin/test -v addons-554168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (1.981457579s)
	I0805 22:49:37.365265 1566127 oci.go:107] Successfully prepared a docker volume addons-554168
	I0805 22:49:37.365288 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:37.365308 1566127 kic.go:194] Starting extracting preloaded images to volume ...
	I0805 22:49:37.365397 1566127 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-554168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0805 22:49:41.639419 1566127 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-554168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273974438s)
	I0805 22:49:41.639455 1566127 kic.go:203] duration metric: took 4.274143633s to extract preloaded images to volume ...
	W0805 22:49:41.639598 1566127 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0805 22:49:41.639712 1566127 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0805 22:49:41.695473 1566127 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-554168 --name addons-554168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-554168 --network addons-554168 --ip 192.168.49.2 --volume addons-554168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0805 22:49:42.020286 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Running}}
	I0805 22:49:42.051796 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.075264 1566127 cli_runner.go:164] Run: docker exec addons-554168 stat /var/lib/dpkg/alternatives/iptables
	I0805 22:49:42.163739 1566127 oci.go:144] the created container "addons-554168" has a running status.
	I0805 22:49:42.163775 1566127 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa...
	I0805 22:49:42.765998 1566127 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0805 22:49:42.797662 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.820714 1566127 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0805 22:49:42.820741 1566127 kic_runner.go:114] Args: [docker exec --privileged addons-554168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0805 22:49:42.900101 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.924858 1566127 machine.go:94] provisionDockerMachine start ...
	I0805 22:49:42.924955 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:42.955057 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:42.955326 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:42.955342 1566127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 22:49:43.101295 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554168
	
	I0805 22:49:43.101322 1566127 ubuntu.go:169] provisioning hostname "addons-554168"
	I0805 22:49:43.101390 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.123503 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:43.123760 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:43.123772 1566127 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-554168 && echo "addons-554168" | sudo tee /etc/hostname
	I0805 22:49:43.277247 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554168
	
	I0805 22:49:43.277347 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.298250 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:43.298510 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:43.298534 1566127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-554168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-554168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-554168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 22:49:43.432711 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
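The empty output above means the /etc/hosts fix-up succeeded: the script rewrites an existing 127.0.1.1 entry to the new hostname, or appends one if absent, so the machine can resolve its own name. Every "About to run SSH command" step boils down to dialing the published port and running one command per session; a sketch assuming golang.org/x/crypto/ssh (libmachine's real client is more involved):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes one command over SSH and returns its stdout,
    // roughly what each "About to run SSH command" log line corresponds to.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM on loopback
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        var out bytes.Buffer
        sess.Stdout = &out
        err = sess.Run(cmd) // e.g. "hostname" or the tee pipeline above
        return out.String(), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:34637", "docker", "id_rsa", "hostname")
        fmt.Println(out, err)
    }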
	I0805 22:49:43.432783 1566127 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19373-1559727/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-1559727/.minikube}
	I0805 22:49:43.432822 1566127 ubuntu.go:177] setting up certificates
	I0805 22:49:43.432832 1566127 provision.go:84] configureAuth start
	I0805 22:49:43.432903 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:43.449429 1566127 provision.go:143] copyHostCerts
	I0805 22:49:43.449516 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.pem (1078 bytes)
	I0805 22:49:43.449656 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/cert.pem (1123 bytes)
	I0805 22:49:43.449731 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/key.pem (1679 bytes)
	I0805 22:49:43.449797 1566127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem org=jenkins.addons-554168 san=[127.0.0.1 192.168.49.2 addons-554168 localhost minikube]
	I0805 22:49:43.917729 1566127 provision.go:177] copyRemoteCerts
	I0805 22:49:43.917811 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker
	I0805 22:49:43.917860 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.934205 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.030068 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 22:49:44.055161 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 22:49:44.079624 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 22:49:44.105929 1566127 provision.go:87] duration metric: took 673.081336ms to configureAuth
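configureAuth, timed above at ~673ms, copies the host CA material and then issues a server certificate whose SANs cover every name the daemon may be reached by (127.0.0.1, 192.168.49.2, addons-554168, localhost, minikube). A sketch of that signing step with crypto/x509, with field values lifted from the log and the function itself purely illustrative:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed server certificate with the SAN list
    // shown in the "generating server cert" log line.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-554168"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"addons-554168", "localhost", "minikube"},
        }
        der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }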
	I0805 22:49:44.105956 1566127 ubuntu.go:193] setting minikube options for container-runtime
	I0805 22:49:44.106155 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:49:44.106274 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.122399 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:44.122647 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:44.122671 1566127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 22:49:44.353701 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 22:49:44.353727 1566127 machine.go:97] duration metric: took 1.428845755s to provisionDockerMachine
	I0805 22:49:44.353737 1566127 client.go:171] duration metric: took 11.199863013s to LocalClient.Create
	I0805 22:49:44.353751 1566127 start.go:167] duration metric: took 11.199919398s to libmachine.API.Create "addons-554168"
	I0805 22:49:44.353758 1566127 start.go:293] postStartSetup for "addons-554168" (driver="docker")
	I0805 22:49:44.353771 1566127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 22:49:44.353842 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 22:49:44.353930 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.371115 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.465754 1566127 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 22:49:44.468910 1566127 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0805 22:49:44.468945 1566127 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0805 22:49:44.468955 1566127 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0805 22:49:44.468962 1566127 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0805 22:49:44.468973 1566127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/addons for local assets ...
	I0805 22:49:44.469057 1566127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/files for local assets ...
	I0805 22:49:44.469079 1566127 start.go:296] duration metric: took 115.315555ms for postStartSetup
	I0805 22:49:44.469391 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:44.485499 1566127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json ...
	I0805 22:49:44.485813 1566127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 22:49:44.485869 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.504375 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.598270 1566127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0805 22:49:44.602832 1566127 start.go:128] duration metric: took 11.451584134s to createHost
	I0805 22:49:44.602857 1566127 start.go:83] releasing machines lock for "addons-554168", held for 11.451744124s
	I0805 22:49:44.602929 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:44.618721 1566127 ssh_runner.go:195] Run: cat /version.json
	I0805 22:49:44.618743 1566127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 22:49:44.618771 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.618786 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.637055 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.637853 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.732978 1566127 ssh_runner.go:195] Run: systemctl --version
	I0805 22:49:44.871905 1566127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 22:49:45.037706 1566127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 22:49:45.053244 1566127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:49:45.081309 1566127 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0805 22:49:45.081469 1566127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:49:45.133281 1566127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0805 22:49:45.133370 1566127 start.go:495] detecting cgroup driver to use...
	I0805 22:49:45.133443 1566127 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0805 22:49:45.133532 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 22:49:45.158587 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 22:49:45.181794 1566127 docker.go:217] disabling cri-docker service (if available) ...
	I0805 22:49:45.182109 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 22:49:45.200666 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 22:49:45.220648 1566127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 22:49:45.338236 1566127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 22:49:45.445089 1566127 docker.go:233] disabling docker service ...
	I0805 22:49:45.445155 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 22:49:45.467864 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 22:49:45.481000 1566127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 22:49:45.575100 1566127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 22:49:45.675035 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 22:49:45.687255 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 22:49:45.703002 1566127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 22:49:45.703110 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.713010 1566127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 22:49:45.713089 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.722946 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.732372 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.742601 1566127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 22:49:45.751669 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.761377 1566127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.776597 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
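Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands themselves, not read back from the node): the pause image kubeadm expects, cgroupfs as the cgroup manager with conmon in the pod cgroup, and a default sysctl letting containers bind low ports.

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]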
	I0805 22:49:45.786342 1566127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 22:49:45.794852 1566127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 22:49:45.803213 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:49:45.896578 1566127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 22:49:46.007689 1566127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 22:49:46.007811 1566127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 22:49:46.012059 1566127 start.go:563] Will wait 60s for crictl version
	I0805 22:49:46.012138 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:49:46.015788 1566127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 22:49:46.052400 1566127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
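	Both "Will wait 60s" steps are simple bounded polls: stat the socket (then run crictl version) until it succeeds or the deadline passes. A minimal sketch of the socket wait, illustrative rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket until it appears or the timeout
    // elapses, like the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s not ready after %s", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }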
	I0805 22:49:46.052526 1566127 ssh_runner.go:195] Run: crio --version
	I0805 22:49:46.090545 1566127 ssh_runner.go:195] Run: crio --version
	I0805 22:49:46.130981 1566127 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0805 22:49:46.132893 1566127 cli_runner.go:164] Run: docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 22:49:46.148236 1566127 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0805 22:49:46.151999 1566127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:49:46.162630 1566127 kubeadm.go:883] updating cluster {Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 22:49:46.162753 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:46.162819 1566127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:49:46.240769 1566127 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:49:46.240794 1566127 crio.go:433] Images already preloaded, skipping extraction
	I0805 22:49:46.240849 1566127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:49:46.276671 1566127 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:49:46.276696 1566127 cache_images.go:84] Images are preloaded, skipping loading
	I0805 22:49:46.276704 1566127 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0805 22:49:46.276805 1566127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-554168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 22:49:46.276890 1566127 ssh_runner.go:195] Run: crio config
	I0805 22:49:46.330094 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:49:46.330122 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:49:46.330131 1566127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 22:49:46.330192 1566127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-554168 NodeName:addons-554168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 22:49:46.330358 1566127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-554168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
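This rendered config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2151-byte scp below), promoted to kubeadm.yaml once the stale-config checks pass, and finally consumed by the kubeadm init invocation at 22:49:49.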
	
	I0805 22:49:46.330441 1566127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 22:49:46.338992 1566127 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 22:49:46.339112 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 22:49:46.347447 1566127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0805 22:49:46.365334 1566127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 22:49:46.383399 1566127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0805 22:49:46.401499 1566127 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0805 22:49:46.404841 1566127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:49:46.415150 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:49:46.505736 1566127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:49:46.520069 1566127 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168 for IP: 192.168.49.2
	I0805 22:49:46.520135 1566127 certs.go:194] generating shared ca certs ...
	I0805 22:49:46.520165 1566127 certs.go:226] acquiring lock for ca certs: {Name:mk45a3b9d27e38f3abe9128d73d1ec1f570fe6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:46.520949 1566127 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key
	I0805 22:49:47.094710 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt ...
	I0805 22:49:47.094743 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt: {Name:mk36f596ece4fe743782bfc12058efc8b4800ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.095526 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key ...
	I0805 22:49:47.095566 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key: {Name:mke4ba11bb197a5d9b523ed8404f8129f4886c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.096188 1566127 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key
	I0805 22:49:47.853668 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt ...
	I0805 22:49:47.853703 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt: {Name:mkdd749e1e56ff4f622e209e7d20e736bad13104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.853894 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key ...
	I0805 22:49:47.853912 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key: {Name:mk082611ab0d6b76988b12bd06f7c6568264a404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
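Both CAs here ("minikubeCA" for the cluster, "proxyClientCA" for the front proxy) are ordinary self-signed roots written out as crt/key pairs under file locks. A sketch of that generation with crypto/x509, assuming RSA keys and a made-up ten-year lifetime:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    // newCA self-signs a root certificate and writes the PEM pair,
    // approximating the "generating ... ca cert" steps above.
    func newCA(name, certPath, keyPath string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: name},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0), // assumed lifetime
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return err
        }
        cert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile(certPath, cert, 0o644); err != nil {
            return err
        }
        kb := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return os.WriteFile(keyPath, kb, 0o600)
    }

    func main() {
        if err := newCA("minikubeCA", "ca.crt", "ca.key"); err != nil {
            panic(err)
        }
    }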
	I0805 22:49:47.853989 1566127 certs.go:256] generating profile certs ...
	I0805 22:49:47.854052 1566127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key
	I0805 22:49:47.854067 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt with IP's: []
	I0805 22:49:48.645261 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt ...
	I0805 22:49:48.645295 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: {Name:mk84ec6671d5f83acdfadf98752918d45c66853f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.646100 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key ...
	I0805 22:49:48.646121 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key: {Name:mkfdac9cd866d8539b411bf0d4357e5aae2e3ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.646271 1566127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e
	I0805 22:49:48.646299 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0805 22:49:48.903510 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e ...
	I0805 22:49:48.903548 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e: {Name:mkc0414456cf3231fb046ce9605c72cafb4d26dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.904362 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e ...
	I0805 22:49:48.904387 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e: {Name:mkc96a745712a90db4ab834b3fd463b4bacab95e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.904526 1566127 certs.go:381] copying /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e -> /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt
	I0805 22:49:48.904639 1566127 certs.go:385] copying /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e -> /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key
	I0805 22:49:48.904697 1566127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key
	I0805 22:49:48.904720 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt with IP's: []
	I0805 22:49:49.095118 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt ...
	I0805 22:49:49.095148 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt: {Name:mk203d37e55d86765d49897e2b602446e5239683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:49.095987 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key ...
	I0805 22:49:49.096012 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key: {Name:mkbb1b5267aa0f1a8fa6d0eda1ba781d0ceb8dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:49.096220 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 22:49:49.096270 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem (1078 bytes)
	I0805 22:49:49.096301 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem (1123 bytes)
	I0805 22:49:49.096330 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem (1679 bytes)
	I0805 22:49:49.096957 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 22:49:49.122100 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 22:49:49.148496 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 22:49:49.175316 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 22:49:49.199992 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 22:49:49.223896 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 22:49:49.248014 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 22:49:49.272272 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 22:49:49.296375 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 22:49:49.321195 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 22:49:49.339067 1566127 ssh_runner.go:195] Run: openssl version
	I0805 22:49:49.344497 1566127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 22:49:49.354337 1566127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.357790 1566127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.357857 1566127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.364738 1566127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
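The b5213941.0 link name is not arbitrary: OpenSSL looks CAs up in /etc/ssl/certs by subject-hash filename, and that hash is exactly what the `openssl x509 -hash -noout` run above printed, so this symlink is what makes minikubeCA.pem discoverable to TLS clients on the node.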
	I0805 22:49:49.374046 1566127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 22:49:49.377275 1566127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 22:49:49.377326 1566127 kubeadm.go:392] StartCluster: {Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:49.377421 1566127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 22:49:49.377487 1566127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 22:49:49.417488 1566127 cri.go:89] found id: ""
	I0805 22:49:49.417609 1566127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 22:49:49.426568 1566127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 22:49:49.435439 1566127 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0805 22:49:49.435548 1566127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 22:49:49.444663 1566127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 22:49:49.444685 1566127 kubeadm.go:157] found existing configuration files:
	
	I0805 22:49:49.444741 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 22:49:49.453554 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 22:49:49.453647 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 22:49:49.462230 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 22:49:49.471038 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 22:49:49.471109 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 22:49:49.479268 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 22:49:49.488024 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 22:49:49.488119 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 22:49:49.496498 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 22:49:49.505603 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 22:49:49.505701 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 22:49:49.513934 1566127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0805 22:49:49.621965 1566127 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0805 22:49:49.694105 1566127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 22:50:08.117301 1566127 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 22:50:08.117358 1566127 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 22:50:08.117443 1566127 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0805 22:50:08.117496 1566127 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0805 22:50:08.117530 1566127 kubeadm.go:310] OS: Linux
	I0805 22:50:08.117577 1566127 kubeadm.go:310] CGROUPS_CPU: enabled
	I0805 22:50:08.117624 1566127 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0805 22:50:08.117670 1566127 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0805 22:50:08.117722 1566127 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0805 22:50:08.117769 1566127 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0805 22:50:08.117816 1566127 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0805 22:50:08.117859 1566127 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0805 22:50:08.117906 1566127 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0805 22:50:08.117953 1566127 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0805 22:50:08.118022 1566127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 22:50:08.118113 1566127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 22:50:08.118202 1566127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 22:50:08.118263 1566127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 22:50:08.120497 1566127 out.go:204]   - Generating certificates and keys ...
	I0805 22:50:08.120613 1566127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 22:50:08.120685 1566127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 22:50:08.120754 1566127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 22:50:08.120814 1566127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 22:50:08.120876 1566127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 22:50:08.120931 1566127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 22:50:08.120989 1566127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 22:50:08.121112 1566127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-554168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0805 22:50:08.121168 1566127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 22:50:08.121286 1566127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-554168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0805 22:50:08.121353 1566127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 22:50:08.121418 1566127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 22:50:08.121464 1566127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 22:50:08.121521 1566127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 22:50:08.121576 1566127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 22:50:08.121636 1566127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 22:50:08.121694 1566127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 22:50:08.121759 1566127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 22:50:08.121816 1566127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 22:50:08.121898 1566127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 22:50:08.121965 1566127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 22:50:08.123692 1566127 out.go:204]   - Booting up control plane ...
	I0805 22:50:08.123808 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 22:50:08.123895 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 22:50:08.123989 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 22:50:08.124118 1566127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 22:50:08.124209 1566127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 22:50:08.124253 1566127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 22:50:08.124406 1566127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 22:50:08.124487 1566127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 22:50:08.124573 1566127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502048523s
	I0805 22:50:08.124660 1566127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 22:50:08.124731 1566127 kubeadm.go:310] [api-check] The API server is healthy after 7.002176584s
	I0805 22:50:08.124848 1566127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 22:50:08.124976 1566127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 22:50:08.125037 1566127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 22:50:08.125235 1566127 kubeadm.go:310] [mark-control-plane] Marking the node addons-554168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 22:50:08.125297 1566127 kubeadm.go:310] [bootstrap-token] Using token: ptxymf.hwassnejjeyita55
	I0805 22:50:08.127020 1566127 out.go:204]   - Configuring RBAC rules ...
	I0805 22:50:08.127124 1566127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 22:50:08.127207 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 22:50:08.127341 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 22:50:08.127482 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 22:50:08.127593 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 22:50:08.127675 1566127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 22:50:08.127787 1566127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 22:50:08.127828 1566127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 22:50:08.127872 1566127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 22:50:08.127877 1566127 kubeadm.go:310] 
	I0805 22:50:08.127935 1566127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 22:50:08.127940 1566127 kubeadm.go:310] 
	I0805 22:50:08.128014 1566127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 22:50:08.128018 1566127 kubeadm.go:310] 
	I0805 22:50:08.128042 1566127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 22:50:08.128098 1566127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 22:50:08.128168 1566127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 22:50:08.128173 1566127 kubeadm.go:310] 
	I0805 22:50:08.128225 1566127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 22:50:08.128230 1566127 kubeadm.go:310] 
	I0805 22:50:08.128276 1566127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 22:50:08.128280 1566127 kubeadm.go:310] 
	I0805 22:50:08.128330 1566127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 22:50:08.128402 1566127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 22:50:08.128469 1566127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 22:50:08.128473 1566127 kubeadm.go:310] 
	I0805 22:50:08.128588 1566127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 22:50:08.128728 1566127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 22:50:08.128746 1566127 kubeadm.go:310] 
	I0805 22:50:08.128835 1566127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ptxymf.hwassnejjeyita55 \
	I0805 22:50:08.128944 1566127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4344817edd8bd0039bbc7d4d6af60e654808fcdca6a599af4a5badecee199b0 \
	I0805 22:50:08.128968 1566127 kubeadm.go:310] 	--control-plane 
	I0805 22:50:08.128973 1566127 kubeadm.go:310] 
	I0805 22:50:08.129079 1566127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 22:50:08.129096 1566127 kubeadm.go:310] 
	I0805 22:50:08.129191 1566127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ptxymf.hwassnejjeyita55 \
	I0805 22:50:08.129335 1566127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4344817edd8bd0039bbc7d4d6af60e654808fcdca6a599af4a5badecee199b0 
	I0805 22:50:08.129358 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:50:08.129367 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:50:08.131320 1566127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 22:50:08.132999 1566127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 22:50:08.137144 1566127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 22:50:08.137165 1566127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 22:50:08.156277 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 22:50:08.416052 1566127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 22:50:08.416150 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:08.416186 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-554168 minikube.k8s.io/updated_at=2024_08_05T22_50_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=addons-554168 minikube.k8s.io/primary=true
	I0805 22:50:08.594346 1566127 ops.go:34] apiserver oom_adj: -16
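The oom_adj value of -16 read back from /proc confirms the kubelet launched kube-apiserver with a negative OOM-score adjustment, so under memory pressure the kernel's OOM killer will pick other processes before the API server.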
	I0805 22:50:08.594442 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:09.095302 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:09.594574 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:10.095304 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:10.594599 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:11.095096 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:11.594606 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:12.094695 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:12.595392 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:13.094615 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:13.595019 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:14.095010 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:14.594552 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:15.095338 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:15.595143 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:16.095041 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:16.595512 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:17.095365 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:17.595254 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:18.095323 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:18.594553 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:19.094609 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:19.595457 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:20.095640 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:20.595556 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:21.094963 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:21.190584 1566127 kubeadm.go:1113] duration metric: took 12.774505312s to wait for elevateKubeSystemPrivileges
	I0805 22:50:21.190614 1566127 kubeadm.go:394] duration metric: took 31.813292438s to StartCluster
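(The burst of identical `kubectl get sa default` calls above is a readiness poll: minikube cannot finish elevateKubeSystemPrivileges until the "default" ServiceAccount exists in kube-system, so it re-runs the probe about every 500ms until it succeeds, 12.77s in this run. A roughly equivalent shell loop, as a sketch only; the probe command is lifted from the log, the loop itself is not minikube's code:

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms cadence of the timestamps above
	done
)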
	I0805 22:50:21.190632 1566127 settings.go:142] acquiring lock: {Name:mk3a1710a3f4cbefc7bc92fbb01d7e9e884b2ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:21.190758 1566127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:50:21.191144 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/kubeconfig: {Name:mk27f7706a4f201bd85010407a0f2ea984ce81b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:21.191338 1566127 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:50:21.191496 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 22:50:21.191776 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:21.191784 1566127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0805 22:50:21.191926 1566127 addons.go:69] Setting yakd=true in profile "addons-554168"
	I0805 22:50:21.191962 1566127 addons.go:234] Setting addon yakd=true in "addons-554168"
	I0805 22:50:21.191958 1566127 addons.go:69] Setting inspektor-gadget=true in profile "addons-554168"
	I0805 22:50:21.191990 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.191996 1566127 addons.go:234] Setting addon inspektor-gadget=true in "addons-554168"
	I0805 22:50:21.192019 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.192085 1566127 addons.go:69] Setting metrics-server=true in profile "addons-554168"
	I0805 22:50:21.192098 1566127 addons.go:234] Setting addon metrics-server=true in "addons-554168"
	I0805 22:50:21.192114 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.192460 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.192510 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.193330 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.194925 1566127 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-554168"
	I0805 22:50:21.195193 1566127 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-554168"
	I0805 22:50:21.195235 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.195057 1566127 addons.go:69] Setting registry=true in profile "addons-554168"
	I0805 22:50:21.196213 1566127 addons.go:234] Setting addon registry=true in "addons-554168"
	I0805 22:50:21.196251 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.196794 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.197796 1566127 addons.go:69] Setting cloud-spanner=true in profile "addons-554168"
	I0805 22:50:21.197831 1566127 addons.go:234] Setting addon cloud-spanner=true in "addons-554168"
	I0805 22:50:21.197870 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.198332 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195069 1566127 addons.go:69] Setting storage-provisioner=true in profile "addons-554168"
	I0805 22:50:21.198520 1566127 addons.go:234] Setting addon storage-provisioner=true in "addons-554168"
	I0805 22:50:21.198548 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.199054 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.208437 1566127 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-554168"
	I0805 22:50:21.208541 1566127 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-554168"
	I0805 22:50:21.208750 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.209300 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195080 1566127 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-554168"
	I0805 22:50:21.215022 1566127 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-554168"
	I0805 22:50:21.215329 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195087 1566127 addons.go:69] Setting volcano=true in profile "addons-554168"
	I0805 22:50:21.228520 1566127 addons.go:234] Setting addon volcano=true in "addons-554168"
	I0805 22:50:21.228626 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.229089 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.229569 1566127 addons.go:69] Setting default-storageclass=true in profile "addons-554168"
	I0805 22:50:21.229630 1566127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-554168"
	I0805 22:50:21.229911 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195095 1566127 addons.go:69] Setting volumesnapshots=true in profile "addons-554168"
	I0805 22:50:21.241627 1566127 addons.go:234] Setting addon volumesnapshots=true in "addons-554168"
	I0805 22:50:21.241686 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.242168 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245065 1566127 addons.go:69] Setting gcp-auth=true in profile "addons-554168"
	I0805 22:50:21.245223 1566127 mustload.go:65] Loading cluster: addons-554168
	I0805 22:50:21.245419 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:21.245650 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245699 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245226 1566127 out.go:177] * Verifying Kubernetes components...
	I0805 22:50:21.289253 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:21.289661 1566127 addons.go:69] Setting ingress=true in profile "addons-554168"
	I0805 22:50:21.289689 1566127 addons.go:234] Setting addon ingress=true in "addons-554168"
	I0805 22:50:21.289729 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.290198 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.312142 1566127 addons.go:69] Setting ingress-dns=true in profile "addons-554168"
	I0805 22:50:21.312193 1566127 addons.go:234] Setting addon ingress-dns=true in "addons-554168"
	I0805 22:50:21.312254 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.312824 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.335585 1566127 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 22:50:21.339217 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 22:50:21.339295 1566127 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 22:50:21.339409 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
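(The `docker container inspect -f` template used here, and many times below, resolves which host port Docker published for the container's SSH port 22; its output feeds the later "sshutil ... new ssh client" lines, Port:34637 in this run. Run standalone it looks like:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-554168
	# prints the published host port, e.g. 34637 here
)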
	I0805 22:50:21.362182 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 22:50:21.364091 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 22:50:21.369189 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 22:50:21.371098 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 22:50:21.373182 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 22:50:21.380391 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 22:50:21.386912 1566127 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 22:50:21.388028 1566127 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 22:50:21.390973 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 22:50:21.395263 1566127 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 22:50:21.395865 1566127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:21.395880 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 22:50:21.395958 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.388833 1566127 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:21.411521 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 22:50:21.411644 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.412270 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 22:50:21.412286 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 22:50:21.412342 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.414701 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 22:50:21.414723 1566127 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 22:50:21.414798 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.388844 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 22:50:21.420773 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 22:50:21.424840 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 22:50:21.424864 1566127 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 22:50:21.424926 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	W0805 22:50:21.433727 1566127 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 22:50:21.435589 1566127 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-554168"
	I0805 22:50:21.435644 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.436133 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.444175 1566127 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 22:50:21.457695 1566127 addons.go:234] Setting addon default-storageclass=true in "addons-554168"
	I0805 22:50:21.457780 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.459498 1566127 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:21.459514 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 22:50:21.459567 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.484126 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.514559 1566127 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 22:50:21.515251 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 22:50:21.519235 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 22:50:21.519348 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 22:50:21.519377 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 22:50:21.519482 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.524116 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 22:50:21.524157 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 22:50:21.524251 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.536663 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.552618 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 22:50:21.565162 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:21.580931 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:21.595724 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 22:50:21.596202 1566127 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:21.596239 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 22:50:21.596408 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.597073 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.597603 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.598629 1566127 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:21.598645 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 22:50:21.598737 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.610233 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.614414 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.677106 1566127 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 22:50:21.681031 1566127 out.go:177]   - Using image docker.io/busybox:stable
	I0805 22:50:21.682820 1566127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:21.682840 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 22:50:21.682906 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.695629 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.696269 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.708793 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.732134 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.741913 1566127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:21.742189 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
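(The bash pipeline above rewrites the CoreDNS Corefile in place: it dumps the coredns ConfigMap, uses sed to splice a `hosts` stanza, plus a `log` directive, ahead of the existing `forward . /etc/resolv.conf` line, and feeds the result back through `kubectl replace`. The stanza it injects, reconstructed from the sed script, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

This is what lets pods resolve host.minikube.internal to the Docker gateway; the later "host record injected into CoreDNS's ConfigMap" line confirms it applied.)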
	I0805 22:50:21.746451 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.758619 1566127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:21.758640 1566127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 22:50:21.758718 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.776920 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.782016 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.810654 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.812724 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	W0805 22:50:21.814541 1566127 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0805 22:50:21.814583 1566127 retry.go:31] will retry after 291.014962ms: ssh: handshake failed: EOF
	I0805 22:50:22.039500 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 22:50:22.039531 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 22:50:22.054400 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:22.075438 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 22:50:22.075463 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 22:50:22.093227 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:22.194889 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 22:50:22.194924 1566127 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 22:50:22.195633 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:22.199911 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 22:50:22.199936 1566127 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 22:50:22.210873 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 22:50:22.210899 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 22:50:22.241798 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 22:50:22.241825 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 22:50:22.247232 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:22.262200 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 22:50:22.262231 1566127 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 22:50:22.316948 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:22.340855 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:22.341190 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 22:50:22.341210 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 22:50:22.409667 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 22:50:22.409692 1566127 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 22:50:22.444481 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:22.444504 1566127 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 22:50:22.451509 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 22:50:22.451574 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 22:50:22.491903 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 22:50:22.491971 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 22:50:22.537981 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:22.538046 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 22:50:22.562533 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 22:50:22.562607 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 22:50:22.590803 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 22:50:22.590869 1566127 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 22:50:22.614727 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:22.646125 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 22:50:22.646196 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 22:50:22.693979 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:22.716703 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 22:50:22.716778 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 22:50:22.719509 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:22.730902 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 22:50:22.730976 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 22:50:22.782277 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:22.782359 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 22:50:22.872097 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 22:50:22.872168 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 22:50:22.887596 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 22:50:22.887670 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 22:50:22.983222 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 22:50:22.983299 1566127 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 22:50:23.005997 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:23.064729 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 22:50:23.064807 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 22:50:23.106266 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 22:50:23.106346 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 22:50:23.169662 1566127 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:23.169734 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 22:50:23.250454 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:23.250523 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 22:50:23.272206 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:23.275254 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 22:50:23.275323 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 22:50:23.334097 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:23.383915 1566127 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.641699225s)
	I0805 22:50:23.383991 1566127 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0805 22:50:23.384599 1566127 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.642662242s)
	I0805 22:50:23.386145 1566127 node_ready.go:35] waiting up to 6m0s for node "addons-554168" to be "Ready" ...
	I0805 22:50:23.396916 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 22:50:23.396986 1566127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 22:50:23.513209 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 22:50:23.513286 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 22:50:23.644821 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 22:50:23.644845 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 22:50:23.809191 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:23.809228 1566127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 22:50:23.956336 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:25.014979 1566127 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-554168" context rescaled to 1 replicas
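(The "rescaled to 1 replicas" line is minikube shrinking the coredns Deployment from its kubeadm default of two replicas, which a single-node cluster does not need. kapi.go does this through client-go rather than the CLI; the kubectl equivalent would be roughly:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1
)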
	I0805 22:50:25.635846 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:26.053425 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.998987924s)
	I0805 22:50:27.258038 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.164775159s)
	I0805 22:50:27.380811 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.185147006s)
	I0805 22:50:27.380913 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.133651614s)
	I0805 22:50:27.893166 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:28.363248 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.046247915s)
	I0805 22:50:28.363292 1566127 addons.go:475] Verifying addon ingress=true in "addons-554168"
	I0805 22:50:28.363508 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.022622903s)
	I0805 22:50:28.363674 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.748866189s)
	I0805 22:50:28.363693 1566127 addons.go:475] Verifying addon metrics-server=true in "addons-554168"
	I0805 22:50:28.363734 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.669676782s)
	I0805 22:50:28.363792 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.644201033s)
	I0805 22:50:28.363823 1566127 addons.go:475] Verifying addon registry=true in "addons-554168"
	I0805 22:50:28.363953 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.357869561s)
	I0805 22:50:28.366042 1566127 out.go:177] * Verifying ingress addon...
	I0805 22:50:28.367314 1566127 out.go:177] * Verifying registry addon...
	I0805 22:50:28.367335 1566127 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-554168 service yakd-dashboard -n yakd-dashboard
	
	I0805 22:50:28.369416 1566127 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 22:50:28.371764 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 22:50:28.388898 1566127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:50:28.388986 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:28.391862 1566127 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 22:50:28.391884 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
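(The long runs of "waiting for pod ... current state: Pending" that follow are kapi.go polling each label selector until every matched pod reports Running and Ready, logging the current phase on each tick. Outside minikube the same wait could be expressed with kubectl alone; a sketch, not what kapi.go actually executes:

	kubectl --context addons-554168 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m
)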
	I0805 22:50:28.594584 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.322288188s)
	W0805 22:50:28.594628 1566127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:28.594748 1566127 retry.go:31] will retry after 152.624023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:28.594867 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.260592354s)
	I0805 22:50:28.748318 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
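(The failure and retry around the volumesnapshots manifests is a CRD-establishment race: the csi-hostpath-snapclass object is applied in the same batch as the CRD that defines its kind, so the API server has no REST mapping for VolumeSnapshotClass yet, hence "ensure CRDs are installed first", and the batch exits 1 even though the CRDs themselves were created. The retried apply succeeds once the CRDs are established. A race-free ordering, sketched with plain kubectl rather than minikube's code:

	# 1. install the CRDs on their own
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	# 2. block until the API server can serve the new kind
	kubectl wait --for condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# 3. only then apply objects of that kind
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)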
	I0805 22:50:29.018939 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.020127 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:29.107333 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 22:50:29.107516 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:29.139627 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:29.161409 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.205020281s)
	I0805 22:50:29.161445 1566127 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-554168"
	I0805 22:50:29.164477 1566127 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 22:50:29.167130 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 22:50:29.336778 1566127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:50:29.336807 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:29.360666 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 22:50:29.442911 1566127 addons.go:234] Setting addon gcp-auth=true in "addons-554168"
	I0805 22:50:29.443041 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:29.444062 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:29.464186 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:29.474753 1566127 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 22:50:29.474811 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:29.502042 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.503842 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:29.710705 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:29.875401 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.879226 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:30.174119 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:30.383063 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:30.383677 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:30.389943 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:30.671054 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:30.874688 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:30.878084 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.172252 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:31.374415 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:31.378958 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.672004 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:31.887310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.888318 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.172791 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:32.242582 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.494215466s)
	I0805 22:50:32.242716 1566127 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767940214s)
	I0805 22:50:32.245469 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:32.247230 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 22:50:32.249116 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 22:50:32.249183 1566127 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 22:50:32.281672 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 22:50:32.281748 1566127 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 22:50:32.310842 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:32.310862 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 22:50:32.331859 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:32.374451 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.378452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:32.394542 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:32.692487 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:32.890801 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:32.894885 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.949924 1566127 addons.go:475] Verifying addon gcp-auth=true in "addons-554168"
	I0805 22:50:32.951822 1566127 out.go:177] * Verifying gcp-auth addon...
	I0805 22:50:32.954667 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 22:50:32.962837 1566127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 22:50:32.962867 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:33.172493 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:33.374543 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:33.376598 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:33.458550 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:33.673729 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:33.873449 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:33.878204 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:33.958972 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:34.173912 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:34.374088 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:34.376350 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:34.458416 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:34.673677 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:34.875179 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:34.876095 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:34.890608 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:34.958249 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:35.172057 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:35.374278 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:35.377398 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:35.458275 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:35.678901 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:35.873891 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:35.877125 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:35.958813 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:36.172129 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:36.376504 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:36.377111 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:36.458833 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:36.676847 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:36.873590 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:36.875792 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:36.958142 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:37.171790 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:37.375450 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:37.376464 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:37.389684 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:37.457879 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:37.671718 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:37.873720 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:37.876816 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:37.957914 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:38.171413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:38.374211 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:38.375856 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:38.457687 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:38.671556 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:38.876475 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:38.880387 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:38.958002 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:39.171727 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:39.373464 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:39.376927 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:39.458407 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:39.673332 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:39.873311 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:39.875713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:39.889632 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:39.962068 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:40.172326 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:40.376474 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:40.376782 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:40.458432 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:40.671437 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:40.874274 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:40.875752 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:40.958211 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:41.172056 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:41.374340 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:41.376887 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:41.460033 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:41.671248 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:41.873895 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:41.876298 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:41.890057 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:41.958164 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:42.171891 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:42.374597 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:42.377572 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:42.458837 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:42.671697 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:42.874267 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:42.876208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:42.958489 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:43.171773 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:43.373901 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:43.377368 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:43.457901 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:43.671261 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:43.873438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:43.876885 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:43.958627 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:44.171605 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:44.374694 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:44.376527 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:44.389137 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:44.458047 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:44.679369 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:44.873715 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:44.876910 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:44.958715 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:45.172001 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:45.376091 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:45.379185 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:45.459069 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:45.671996 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:45.873324 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:45.876591 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:45.959237 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:46.172625 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:46.373440 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:46.376144 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:46.389935 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:46.458757 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:46.671935 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:46.873853 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:46.876390 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:46.958143 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:47.171749 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:47.373851 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:47.376245 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:47.458251 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:47.671610 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:47.874760 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:47.876585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:47.958706 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:48.172124 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:48.373945 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:48.375971 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:48.458292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:48.671382 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:48.873242 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:48.875864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:48.889987 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:48.958455 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:49.171522 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:49.374132 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:49.375849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:49.458442 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:49.673100 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:49.874339 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:49.876529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:49.972756 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:50.172202 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:50.376310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:50.377061 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:50.458778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:50.671955 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:50.873636 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:50.877043 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:50.958805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:51.172151 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:51.374769 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:51.376907 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:51.389623 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:51.458231 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:51.672040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:51.873434 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:51.875638 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:51.958795 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:52.172146 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:52.374758 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.375813 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:52.458395 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:52.672218 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:52.873284 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.876031 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:52.958640 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:53.171849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.373892 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.376103 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.389802 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:53.458022 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:53.671296 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.873832 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.876978 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.958067 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:54.172127 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.374031 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.376282 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.458225 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:54.671400 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.873882 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.876310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.958153 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.171697 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.374121 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.375886 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.458821 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.671663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.874128 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.875921 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.890695 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:55.958018 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.172031 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.375128 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.377781 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.458780 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.672114 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.873803 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.875742 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.958678 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.171773 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.373322 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.375648 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.459024 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.671641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.873146 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.875830 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.958640 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.173949 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.373979 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.376526 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.389472 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:58.461909 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.671944 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.873913 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.876225 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.957943 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.171345 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.373729 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.376883 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.458873 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.670832 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.873638 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.876749 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.958707 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.204058 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.383864 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.384114 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.392279 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:00.458532 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.671040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.873426 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.876657 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.958965 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.171428 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.374410 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.375486 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.458281 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.671812 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.874623 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.876454 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.958138 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.171805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.374541 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.376380 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.459194 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.671999 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.874966 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.875330 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.889457 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:02.958801 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.172024 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.375545 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.376285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.459005 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.671922 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.874663 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.876529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.958490 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.171543 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.374028 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.376471 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.458854 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.671763 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.873960 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.876662 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.889724 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:04.958448 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.171258 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.374142 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.375852 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.459101 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.671181 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.873219 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.875553 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.959805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.171292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.376065 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.376873 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.458778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.672289 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.874660 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.876191 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.890130 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:06.957935 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.171736 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.374614 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.376399 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.458734 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.672063 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.873703 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.876167 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.958390 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.192479 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.378249 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.383614 1566127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:51:08.383641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.394945 1566127 node_ready.go:49] node "addons-554168" has status "Ready":"True"
	I0805 22:51:08.394971 1566127 node_ready.go:38] duration metric: took 45.008683249s for node "addons-554168" to be "Ready" ...
	I0805 22:51:08.394981 1566127 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
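The checkpoint above is minikube's node readiness gate: node_ready.go polls the node object until its "Ready" condition flips to "True" (here after roughly 45s), and only then does the harness start the per-pod waits on the system-critical labels it just listed. A minimal client-go sketch of that condition check follows; the kubeconfig path and client construction are assumptions for illustration (minikube wires its client differently), only the node name and the printed fact come from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-554168", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// Reports the same fact as the log line:
				// node "addons-554168" has status "Ready":"True"
				fmt.Printf("node %q has status %q:%q\n", node.Name, cond.Type, cond.Status)
			}
		}
	}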
	I0805 22:51:08.409843 1566127 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:08.513744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.672989 1566127 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:51:08.673016 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.876777 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.880867 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.966261 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.173090 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.379795 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.388279 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.458512 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.672285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.874385 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.876929 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.964413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.201488 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.398423 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.399515 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.418870 1566127 pod_ready.go:102] pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:10.458285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.674418 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.874847 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.877424 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.920588 1566127 pod_ready.go:92] pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.920668 1566127 pod_ready.go:81] duration metric: took 2.510791132s for pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.920708 1566127 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.932533 1566127 pod_ready.go:92] pod "etcd-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.932627 1566127 pod_ready.go:81] duration metric: took 11.879425ms for pod "etcd-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.932658 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.939769 1566127 pod_ready.go:92] pod "kube-apiserver-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.939839 1566127 pod_ready.go:81] duration metric: took 7.159137ms for pod "kube-apiserver-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.939867 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.946886 1566127 pod_ready.go:92] pod "kube-controller-manager-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.946958 1566127 pod_ready.go:81] duration metric: took 7.067832ms for pod "kube-controller-manager-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.946986 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp29n" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.953739 1566127 pod_ready.go:92] pod "kube-proxy-lp29n" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.953812 1566127 pod_ready.go:81] duration metric: took 6.805501ms for pod "kube-proxy-lp29n" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.953840 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.959446 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.173121 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.317590 1566127 pod_ready.go:92] pod "kube-scheduler-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:11.317614 1566127 pod_ready.go:81] duration metric: took 363.753096ms for pod "kube-scheduler-addons-554168" in "kube-system" namespace to be "Ready" ...
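Each pod_ready.go:78/92 pair above is a blocking wait on a single pod's "Ready" condition, with the elapsed time reported as a duration metric once the condition holds. A hedged sketch of that per-pod test, reusing the imports and clientset from the previous snippet (the function name and signature are illustrative, not minikube's):

	// isPodReady reports whether the pod's Ready condition is "True", the
	// test applied here to each system-critical pod in kube-system.
	func isPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}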
	I0805 22:51:11.317627 1566127 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:11.374112 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.377178 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.458136 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.672984 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.874345 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.877076 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.960523 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.172468 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.374432 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.377395 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.458292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.675090 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.886231 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.887700 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.959054 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.173989 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.337661 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:13.376673 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.380654 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.458926 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.673942 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.877743 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.883346 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.958714 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.173498 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.378059 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.385019 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.460285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.674713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.878902 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.880902 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.958778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.176120 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.377650 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.380881 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.459190 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.674108 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.833823 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:15.882639 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.883582 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.959956 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.174369 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.374634 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.382465 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.459364 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.672963 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.875326 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.878265 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.959334 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.173596 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.374355 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.377290 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.458048 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.672544 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.873865 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.877502 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.958846 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.174479 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.325339 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:18.384124 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.385413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.459362 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.675180 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.873635 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.877174 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.959432 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.174065 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.373948 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.383505 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.458878 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.674604 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.885594 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.890601 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.959405 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.174807 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.379044 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.380277 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.461514 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.674040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.826232 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:20.875349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.880774 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.959365 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.174552 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.378438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.382967 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.459977 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.674151 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.876232 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.880690 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.959917 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.174818 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.382349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.391560 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.460042 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.673889 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.874134 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.878110 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.966277 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.176425 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.326644 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:23.377275 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.379621 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.458493 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.673846 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.874006 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.877687 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.962000 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.172862 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.375105 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.381855 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.460083 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.674097 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.874840 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.879471 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.959520 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.173704 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.374116 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.378005 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.458585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.672713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.855205 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:25.877259 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.880241 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.962456 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.172849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.375799 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.377384 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.459394 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.672747 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.874314 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.877388 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.962825 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.173585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.377264 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.382322 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.460138 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.673671 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.875035 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.883321 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.959207 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.175557 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.325061 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:28.376264 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.379956 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.459150 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.690306 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.875167 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.876925 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.958386 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.173515 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.376094 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.378486 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.460621 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.675558 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.875716 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.878902 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.958784 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.174827 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.376541 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.384337 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.459290 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.673941 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.827469 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:30.877849 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.887836 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.959411 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.173452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.375156 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.378970 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:31.458947 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.706663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.898397 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:31.900230 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.964948 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.175160 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.380444 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.383869 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:32.458848 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.673744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.874251 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.877208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:32.958960 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.176651 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.324185 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:33.375306 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.378038 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:33.458682 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.674077 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.874972 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.878943 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:33.958704 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.172964 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.373751 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.377427 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:34.458154 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.679341 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.876109 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.889740 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:34.965056 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.173647 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.328436 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:35.383676 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:35.385279 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.460967 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.673554 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.874370 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.878286 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:35.958564 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.173214 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.375602 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.378208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:36.458950 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.673490 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.875924 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.878631 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:36.958070 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.177913 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.373884 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.377044 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:37.458663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.673576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.825334 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:37.879213 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.885886 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:37.963936 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.174790 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.376637 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.380183 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:38.459436 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.675719 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.873783 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.877871 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:38.960175 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.175778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.378270 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:39.380082 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.459433 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.679750 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.875234 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.881466 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:39.959025 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.174404 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.330090 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:40.382393 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:40.383976 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.464959 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.675276 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.877780 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.880692 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:40.958584 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.177541 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.374692 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.378048 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:41.459822 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.678099 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.874486 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.876363 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:41.958327 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.173452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.376076 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.383470 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:42.459301 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.673494 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.824374 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:42.874317 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.878270 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:42.959990 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.172659 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.378247 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.378764 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:43.459041 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.673255 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.882027 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:43.885425 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.959035 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.173389 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.375228 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.377800 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:44.458727 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.673416 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.874716 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.877248 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:44.958059 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.183585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.328040 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:45.384622 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.398075 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:45.460242 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.690926 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.874088 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.877189 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:45.958272 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.173295 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.375118 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.382559 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:46.462162 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.674513 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.880542 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.883396 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:46.960470 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.173269 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.404137 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.413718 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:47.458856 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.681256 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.824703 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:47.874732 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.882562 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:47.958678 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.176243 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.376341 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.388641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:48.458469 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.675016 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.878084 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.881966 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:48.958605 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.174716 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.374976 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.377631 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:49.458315 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.673293 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.824963 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:49.874312 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.877289 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:49.958736 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.188700 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.374349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.377319 1566127 kapi.go:107] duration metric: took 1m22.005554259s to wait for kubernetes.io/minikube-addons=registry ...
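For reference, the label wait that just completed (1m22s for the registry addon) is a poll over pods matching the addon label until all report Ready. A roughly equivalent manual check, as a minimal sketch assuming minikube's default of placing the registry pods in kube-system:

	# Hedged sketch: block until every pod carrying the addon label is Ready.
	# The kube-system namespace is assumed from minikube's registry defaults;
	# the label selector is taken from the log lines above.
	kubectl --context addons-554168 -n kube-system wait \
	  --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --timeout=120s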
	I0805 22:51:50.458824 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.673563 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.874353 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.958079 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.172996 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.375940 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.459497 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.673960 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.874290 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.958908 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.174840 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.326270 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:52.375660 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.458600 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.674452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.875201 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.958908 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.173941 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.375781 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.459576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.673906 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.874755 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.958581 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.174123 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.326793 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:54.375367 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.459123 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.677520 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.875472 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.960391 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.174032 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.375119 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.469304 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.673565 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.874350 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.958654 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.173305 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.374777 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.458888 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.682744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.825703 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:56.876304 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.958841 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.176537 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.375536 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.458861 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.673312 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.879517 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.959555 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.174408 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.375129 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.461877 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.673770 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.874329 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.958983 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.173863 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.324759 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:59.374700 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.458588 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.672673 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.874774 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.958581 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.189576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.414910 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.459354 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.675300 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.875855 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.958871 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.175582 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.325171 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:01.376813 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.459508 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.674334 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.884488 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.958864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.174081 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.373937 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.458299 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.675092 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.874621 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.959146 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.173556 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.326139 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:03.374626 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.458383 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.672623 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.873696 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.958326 1566127 kapi.go:107] duration metric: took 1m31.003657779s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 22:52:03.960031 1566127 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-554168 cluster.
	I0805 22:52:03.961788 1566127 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 22:52:03.963383 1566127 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
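The opt-out described in the message above needs only the label key to be present on the pod. A minimal sketch, with a hypothetical pod name and image:

	# Hedged example: the gcp-auth webhook skips pods carrying the
	# gcp-auth-skip-secret label key (per the message above); the value
	# itself is not significant. Pod name and image are placeholders.
	kubectl --context addons-554168 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF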
	I0805 22:52:04.173072 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.373945 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.672864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.874229 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.175458 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.374355 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.673281 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.829304 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:05.874605 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.172529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.374455 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.673491 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.874397 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.173161 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.375308 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.672985 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.874036 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.172521 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.325677 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:08.374566 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.672666 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.876650 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.174359 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.375628 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.673925 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.876082 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.172720 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.374793 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.673305 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.824835 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:10.874348 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.172707 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:11.374709 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.674459 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:11.875179 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.173594 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:12.386639 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.673874 1566127 kapi.go:107] duration metric: took 1m43.506743484s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
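With registry, gcp-auth, and csi-hostpath-driver all done, only the ingress-nginx wait remains open below. A hedged way to watch that directly, assuming the deployment name minikube's ingress addon conventionally uses:

	# Hedged sketch: ingress-nginx-controller is the usual deployment name
	# for minikube's ingress addon; adjust if this cluster names it differently.
	kubectl --context addons-554168 -n ingress-nginx \
	  rollout status deployment/ingress-nginx-controller --timeout=240s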
	I0805 22:52:12.874448 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.326090 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:13.374514 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.874478 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.374043 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.877395 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.338072 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:15.376290 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.874773 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.374438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.874350 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.374934 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.827975 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:17.875237 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.374313 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.832928 1566127 pod_ready.go:92] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"True"
	I0805 22:52:18.832959 1566127 pod_ready.go:81] duration metric: took 1m7.51532442s for pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.832972 1566127 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.842526 1566127 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace has status "Ready":"True"
	I0805 22:52:18.842554 1566127 pod_ready.go:81] duration metric: took 9.572401ms for pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.842577 1566127 pod_ready.go:38] duration metric: took 1m10.447580242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
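The summary above amounts to one readiness sweep per listed label. A compact re-check, assuming (as holds for these components) that all of them run in kube-system:

	# Hedged sketch: list pods for each system-critical label from the log.
	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy \
	         component=kube-scheduler; do
	  kubectl --context addons-554168 -n kube-system get pods -l "$l"
	done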
	I0805 22:52:18.842593 1566127 api_server.go:52] waiting for apiserver process to appear ...
	I0805 22:52:18.843273 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:18.843345 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:18.893192 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.954292 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:18.954317 1566127 cri.go:89] found id: ""
	I0805 22:52:18.954327 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:18.954948 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:18.976360 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:18.976435 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:19.112787 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:19.112810 1566127 cri.go:89] found id: ""
	I0805 22:52:19.112819 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:19.112897 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.125060 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:19.125147 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:19.202655 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:19.202681 1566127 cri.go:89] found id: ""
	I0805 22:52:19.202689 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:19.202746 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.206922 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:19.206997 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:19.286716 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:19.286740 1566127 cri.go:89] found id: ""
	I0805 22:52:19.286749 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:19.286814 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.292053 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:19.292138 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:19.355216 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:19.355241 1566127 cri.go:89] found id: ""
	I0805 22:52:19.355249 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:19.355316 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.361963 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:19.362039 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:19.375748 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.426252 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:19.426280 1566127 cri.go:89] found id: ""
	I0805 22:52:19.426289 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:19.426358 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.434080 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:19.434170 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:19.496471 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:19.496493 1566127 cri.go:89] found id: ""
	I0805 22:52:19.496508 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:19.496619 1566127 ssh_runner.go:195] Run: which crictl
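The seven near-identical discovery steps above (one crictl listing per component, each followed by "which crictl") collapse into a single pass when reproduced by hand on the node (for example via: minikube -p addons-554168 ssh):

	# Hedged condensation of the per-component container discovery above,
	# run inside the node.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  printf '%s: %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c")"
	done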
	I0805 22:52:19.505747 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:19.505773 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:19.770483 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:19.770514 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:19.853681 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:19.853722 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
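Each gathering step above tails the last 400 lines of one container. Run inside the node, the same pull looks like this, substituting whichever ID the discovery loop printed:

	# Hedged sketch: tail a control-plane container's logs by ID, here
	# resolving the kube-apiserver ID inline (head -n1 guards against
	# multiple matches).
	sudo crictl logs --tail 400 \
	  "$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"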
	I0805 22:52:19.874955 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.949182 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:19.949263 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:20.084140 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:20.084217 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
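The kubelet entries flagged below are scanned out of systemd's journal; the equivalent window on the node is:

	# Hedged sketch: the same 400-line kubelet window the runner collects above.
	sudo journalctl -u kubelet -n 400 --no-pager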
	W0805 22:52:20.145430 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.145647 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.150814 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151055 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151243 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151446 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151632 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151841 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152006 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152191 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152381 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152663 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152867 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.153076 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
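The "forbidden ... no relationship found between node 'addons-554168' and this object" warnings above are startup races with Kubernetes' Node authorizer: a kubelet may only read the secrets and configmaps referenced by pods already bound to its node, so reflectors that start while pods are still being scheduled can log a few denials before the bindings become visible, after which the retries succeed. The scanner that surfaces them (logs.go:138) effectively greps the kubelet journal for reflector failures; a minimal sketch of that idea in Go, reusing the journalctl command shown above (the regexp here is an illustration, not minikube's actual pattern):

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		// Same source the report uses: the last 400 kubelet journal lines.
		cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
		out, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// Flag reflector list/watch denials like the ones quoted above.
		problem := regexp.MustCompile(`reflector\.go:\d+\].*(is forbidden|failed to list|Failed to watch)`)
		sc := bufio.NewScanner(out)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
		for sc.Scan() {
			if line := sc.Text(); problem.MatchString(line) {
				fmt.Println("Found kubelet problem:", line)
			}
		}
		cmd.Wait()
	}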
	I0805 22:52:20.197558 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:20.197602 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:20.273120 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:20.276083 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:20.332603 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:20.332690 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:20.378076 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.417592 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:20.417625 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:20.484498 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:20.484530 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:20.554925 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:20.554958 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:20.677357 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:20.677404 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:20.710010 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:20.710038 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:20.710087 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:20.710100 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710110 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710119 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710126 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710132 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:20.710138 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:20.710144 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:52:20.876013 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.374257 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.874250 1566127 kapi.go:107] duration metric: took 1m53.504836551s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 22:52:21.876178 1566127 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0805 22:52:21.877785 1566127 addons.go:510] duration metric: took 2m0.685991291s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
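Each control-plane component above is collected with the same two-step pattern: resolve container IDs with "sudo crictl ps -a --quiet --name=<component>", then pull the last 400 lines with "sudo crictl logs --tail 400 <id>". A minimal Go sketch of that loop, assuming only that crictl is installed on the node (the helper names and error handling are illustrative, not minikube's cri.go/logs.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>": one ID per line.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs mirrors "sudo crictl logs --tail 400 <id>".
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, name := range components {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Printf("%s: lookup failed: %v\n", name, err)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id) // best effort, like the report
				fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
			}
		}
	}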
	I0805 22:52:30.711717 1566127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 22:52:30.726224 1566127 api_server.go:72] duration metric: took 2m9.534856939s to wait for apiserver process to appear ...
	I0805 22:52:30.726250 1566127 api_server.go:88] waiting for apiserver healthz status ...
	I0805 22:52:30.726283 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:30.726337 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:30.764705 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:30.764726 1566127 cri.go:89] found id: ""
	I0805 22:52:30.764734 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:30.764791 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.768211 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:30.768278 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:30.808094 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:30.808114 1566127 cri.go:89] found id: ""
	I0805 22:52:30.808122 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:30.808178 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.811721 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:30.811796 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:30.850590 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:30.850614 1566127 cri.go:89] found id: ""
	I0805 22:52:30.850622 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:30.850679 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.854292 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:30.854365 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:30.893275 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:30.893299 1566127 cri.go:89] found id: ""
	I0805 22:52:30.893307 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:30.893368 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.896962 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:30.897035 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:30.937066 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:30.937089 1566127 cri.go:89] found id: ""
	I0805 22:52:30.937097 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:30.937160 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.940629 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:30.940748 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:30.979170 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:30.979193 1566127 cri.go:89] found id: ""
	I0805 22:52:30.979209 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:30.979271 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.982580 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:30.982644 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:31.023003 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:31.023026 1566127 cri.go:89] found id: ""
	I0805 22:52:31.023033 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:31.023094 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:31.026613 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:31.026647 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:31.078975 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:31.079015 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:31.126524 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:31.126562 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:31.172237 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:31.172267 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:31.222815 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:31.222847 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:31.295478 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:31.295517 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:31.343130 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:31.343166 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:31.362607 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:31.362637 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:31.517835 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:31.517868 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:31.580676 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:31.580713 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:31.676789 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:31.676868 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0805 22:52:31.724835 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.725112 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727467 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727659 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727844 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728047 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728238 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728448 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728621 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728807 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728993 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729202 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729387 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729592 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:31.763081 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:31.763110 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:31.800452 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:31.800478 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:31.800526 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:31.800542 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800577 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800594 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800602 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800611 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:31.800623 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:31.800629 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:52:41.801964 1566127 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0805 22:52:41.810635 1566127 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0805 22:52:41.812208 1566127 api_server.go:141] control plane version: v1.30.3
	I0805 22:52:41.812240 1566127 api_server.go:131] duration metric: took 11.085981662s to wait for apiserver health ...
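The healthz gate above is a plain HTTPS GET: the apiserver answers 200 with the literal body "ok" once its internal checks pass, and /healthz (like /livez and /readyz) is readable even unauthenticated via the default system:public-info-viewer binding. A minimal sketch of the probe against the endpoint from the log, skipping certificate verification purely for illustration (a real client would verify the cluster CA from the kubeconfig instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: a real probe verifies the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d:\n%s\n", resp.Request.URL, resp.StatusCode, body)
	}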
	I0805 22:52:41.812249 1566127 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 22:52:41.812271 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:41.812334 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:41.854122 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:41.854142 1566127 cri.go:89] found id: ""
	I0805 22:52:41.854150 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:41.854210 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.857636 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:41.857707 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:41.895789 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:41.895865 1566127 cri.go:89] found id: ""
	I0805 22:52:41.895885 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:41.895974 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.899389 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:41.899457 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:41.942516 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:41.942588 1566127 cri.go:89] found id: ""
	I0805 22:52:41.942603 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:41.942664 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.946535 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:41.946620 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:41.987110 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:41.987133 1566127 cri.go:89] found id: ""
	I0805 22:52:41.987142 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:41.987204 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.990884 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:41.990958 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:42.055800 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:42.055823 1566127 cri.go:89] found id: ""
	I0805 22:52:42.055831 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:42.055889 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.059860 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:42.059940 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:42.104956 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:42.104984 1566127 cri.go:89] found id: ""
	I0805 22:52:42.104991 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:42.105059 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.109486 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:42.109584 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:42.159575 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:42.159600 1566127 cri.go:89] found id: ""
	I0805 22:52:42.159609 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:42.159677 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.164177 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:42.164214 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:42.185524 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:42.185717 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:42.343018 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:42.343051 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:42.398119 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:42.398153 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:42.448477 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:42.448511 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:42.541758 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:42.541797 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0805 22:52:42.593894 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.594635 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597126 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597324 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597511 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597719 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597937 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598159 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598349 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598536 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598724 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598967 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.599159 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.599366 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:42.634056 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:42.634088 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:42.681807 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:42.681841 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:42.734224 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:42.734255 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:42.781028 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:42.781065 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:42.819987 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:42.820017 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:42.912058 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:42.912089 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:42.962042 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:42.962071 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:42.962144 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:42.962160 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962167 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962176 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962189 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962362 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:42.962371 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:42.962382 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:52:52.976729 1566127 system_pods.go:59] 18 kube-system pods found
	I0805 22:52:52.976775 1566127 system_pods.go:61] "coredns-7db6d8ff4d-prz4h" [278434ff-e033-485a-b4bc-320db42e8d40] Running
	I0805 22:52:52.976782 1566127 system_pods.go:61] "csi-hostpath-attacher-0" [08e40914-ba9f-4ff2-88ef-d16dc5d650ef] Running
	I0805 22:52:52.976787 1566127 system_pods.go:61] "csi-hostpath-resizer-0" [1c4036fd-0450-4070-bea9-d46b5d5a51a6] Running
	I0805 22:52:52.976792 1566127 system_pods.go:61] "csi-hostpathplugin-pz5t5" [3d8afa71-9759-47b2-840d-51f8c0a66d69] Running
	I0805 22:52:52.976799 1566127 system_pods.go:61] "etcd-addons-554168" [aa854717-a161-49ba-b27b-91967097bffe] Running
	I0805 22:52:52.976805 1566127 system_pods.go:61] "kindnet-jtck6" [6a21b8aa-054e-4f1d-88df-3b7ace40541b] Running
	I0805 22:52:52.976810 1566127 system_pods.go:61] "kube-apiserver-addons-554168" [a31835e7-19cf-4813-918e-3c3cb3013d45] Running
	I0805 22:52:52.976820 1566127 system_pods.go:61] "kube-controller-manager-addons-554168" [314b12cf-3dfc-45fe-9d28-aa0a7fdb65d5] Running
	I0805 22:52:52.976825 1566127 system_pods.go:61] "kube-ingress-dns-minikube" [fa78f0fa-4656-494a-8b6f-92f40e4c8f8b] Running
	I0805 22:52:52.976833 1566127 system_pods.go:61] "kube-proxy-lp29n" [327a3427-7590-4179-951e-c53d7d42f072] Running
	I0805 22:52:52.976838 1566127 system_pods.go:61] "kube-scheduler-addons-554168" [c1e4f7f4-71e0-4719-a35b-6224c4f46acc] Running
	I0805 22:52:52.976845 1566127 system_pods.go:61] "metrics-server-c59844bb4-4dgqd" [87a4cfae-8eae-4755-8efe-9e869f5ea69e] Running
	I0805 22:52:52.976850 1566127 system_pods.go:61] "nvidia-device-plugin-daemonset-vngm6" [bc68d922-6356-4b7c-a0af-9f0e70a94548] Running
	I0805 22:52:52.976854 1566127 system_pods.go:61] "registry-698f998955-x6xxq" [4ae86949-feca-437d-8b71-1b2bea971616] Running
	I0805 22:52:52.976858 1566127 system_pods.go:61] "registry-proxy-5pp4p" [03aac67e-c40e-4703-995f-88bab30fa562] Running
	I0805 22:52:52.976872 1566127 system_pods.go:61] "snapshot-controller-745499f584-lbm9t" [4041ad86-2418-4108-a5e7-e00452c8eb62] Running
	I0805 22:52:52.976882 1566127 system_pods.go:61] "snapshot-controller-745499f584-lbzrh" [54002dfa-8dcf-42f4-b80b-24154275fc76] Running
	I0805 22:52:52.976886 1566127 system_pods.go:61] "storage-provisioner" [97eb7b7b-4406-412d-94ec-49b93cfc1495] Running
	I0805 22:52:52.976893 1566127 system_pods.go:74] duration metric: took 11.164637418s to wait for pod list to return data ...
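The 18-pod roll call above comes from a single List of the kube-system namespace, with each entry reported against its pod phase. A minimal client-go sketch of the same listing, assuming a kubeconfig at the default path and simplifying the readiness rule to phase Running or Succeeded (minikube's system_pods check is more involved):

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			note := ""
			if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
				note = " (not ready)"
			}
			fmt.Printf("%q [%s] %s%s\n", p.Name, p.UID, p.Status.Phase, note)
		}
	}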
	I0805 22:52:52.976901 1566127 default_sa.go:34] waiting for default service account to be created ...
	I0805 22:52:52.979340 1566127 default_sa.go:45] found service account: "default"
	I0805 22:52:52.979365 1566127 default_sa.go:55] duration metric: took 2.457125ms for default service account to be created ...
	I0805 22:52:52.979375 1566127 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 22:52:52.988986 1566127 system_pods.go:86] 18 kube-system pods found
	I0805 22:52:52.989020 1566127 system_pods.go:89] "coredns-7db6d8ff4d-prz4h" [278434ff-e033-485a-b4bc-320db42e8d40] Running
	I0805 22:52:52.989027 1566127 system_pods.go:89] "csi-hostpath-attacher-0" [08e40914-ba9f-4ff2-88ef-d16dc5d650ef] Running
	I0805 22:52:52.989032 1566127 system_pods.go:89] "csi-hostpath-resizer-0" [1c4036fd-0450-4070-bea9-d46b5d5a51a6] Running
	I0805 22:52:52.989037 1566127 system_pods.go:89] "csi-hostpathplugin-pz5t5" [3d8afa71-9759-47b2-840d-51f8c0a66d69] Running
	I0805 22:52:52.989042 1566127 system_pods.go:89] "etcd-addons-554168" [aa854717-a161-49ba-b27b-91967097bffe] Running
	I0805 22:52:52.989047 1566127 system_pods.go:89] "kindnet-jtck6" [6a21b8aa-054e-4f1d-88df-3b7ace40541b] Running
	I0805 22:52:52.989051 1566127 system_pods.go:89] "kube-apiserver-addons-554168" [a31835e7-19cf-4813-918e-3c3cb3013d45] Running
	I0805 22:52:52.989055 1566127 system_pods.go:89] "kube-controller-manager-addons-554168" [314b12cf-3dfc-45fe-9d28-aa0a7fdb65d5] Running
	I0805 22:52:52.989059 1566127 system_pods.go:89] "kube-ingress-dns-minikube" [fa78f0fa-4656-494a-8b6f-92f40e4c8f8b] Running
	I0805 22:52:52.989063 1566127 system_pods.go:89] "kube-proxy-lp29n" [327a3427-7590-4179-951e-c53d7d42f072] Running
	I0805 22:52:52.989068 1566127 system_pods.go:89] "kube-scheduler-addons-554168" [c1e4f7f4-71e0-4719-a35b-6224c4f46acc] Running
	I0805 22:52:52.989074 1566127 system_pods.go:89] "metrics-server-c59844bb4-4dgqd" [87a4cfae-8eae-4755-8efe-9e869f5ea69e] Running
	I0805 22:52:52.989079 1566127 system_pods.go:89] "nvidia-device-plugin-daemonset-vngm6" [bc68d922-6356-4b7c-a0af-9f0e70a94548] Running
	I0805 22:52:52.989087 1566127 system_pods.go:89] "registry-698f998955-x6xxq" [4ae86949-feca-437d-8b71-1b2bea971616] Running
	I0805 22:52:52.989092 1566127 system_pods.go:89] "registry-proxy-5pp4p" [03aac67e-c40e-4703-995f-88bab30fa562] Running
	I0805 22:52:52.989099 1566127 system_pods.go:89] "snapshot-controller-745499f584-lbm9t" [4041ad86-2418-4108-a5e7-e00452c8eb62] Running
	I0805 22:52:52.989104 1566127 system_pods.go:89] "snapshot-controller-745499f584-lbzrh" [54002dfa-8dcf-42f4-b80b-24154275fc76] Running
	I0805 22:52:52.989114 1566127 system_pods.go:89] "storage-provisioner" [97eb7b7b-4406-412d-94ec-49b93cfc1495] Running
	I0805 22:52:52.989122 1566127 system_pods.go:126] duration metric: took 9.740959ms to wait for k8s-apps to be running ...
	I0805 22:52:52.989130 1566127 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 22:52:52.989195 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 22:52:53.002197 1566127 system_svc.go:56] duration metric: took 13.034415ms WaitForService to wait for kubelet
	I0805 22:52:53.002229 1566127 kubeadm.go:582] duration metric: took 2m31.810865348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
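WaitForService above leans on systemd's exit-code contract: "systemctl is-active --quiet <unit>" prints nothing and exits 0 only when the unit is active, so the check reduces to inspecting the error from running it. A sketch assuming the kubelet unit name and sudo access (the report's exact invocation also carries a "service" argument from minikube's sysinit layer):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exits 0 only if the unit is active; any other state surfaces as an error.
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not running:", err)
			return
		}
		fmt.Println("kubelet service is running")
	}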
	I0805 22:52:53.002253 1566127 node_conditions.go:102] verifying NodePressure condition ...
	I0805 22:52:53.007160 1566127 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 22:52:53.007196 1566127 node_conditions.go:123] node cpu capacity is 2
	I0805 22:52:53.007208 1566127 node_conditions.go:105] duration metric: took 4.948767ms to run NodePressure ...
	I0805 22:52:53.007223 1566127 start.go:241] waiting for startup goroutines ...
	I0805 22:52:53.007230 1566127 start.go:246] waiting for cluster config update ...
	I0805 22:52:53.007247 1566127 start.go:255] writing updated cluster config ...
	I0805 22:52:53.007580 1566127 ssh_runner.go:195] Run: rm -f paused
	I0805 22:52:53.323183 1566127 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 22:52:53.326795 1566127 out.go:177] * Done! kubectl is now configured to use "addons-554168" cluster and "default" namespace by default
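The closing line compares the host kubectl's minor version with the cluster's: Kubernetes' skew policy supports kubectl within one minor version of the apiserver, so a skew of 0 needs no warning. A sketch of that comparison, assuming plain MAJOR.MINOR.PATCH strings (minikube parses full semver; this trims that down):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "MAJOR.MINOR.PATCH" version string.
	func minor(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		kubectlVer, clusterVer := "1.30.3", "1.30.3"
		a, err := minor(kubectlVer)
		if err != nil {
			panic(err)
		}
		b, err := minor(clusterVer)
		if err != nil {
			panic(err)
		}
		skew := a - b
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVer, clusterVer, skew)
		if skew > 1 {
			fmt.Println("warning: kubectl is more than one minor version away from the cluster")
		}
	}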
	
	
	==> CRI-O <==
	Aug 05 22:57:14 addons-554168 crio[967]: time="2024-08-05 22:57:14.432825969Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=32c6db50-95b8-4d3e-b4fd-79f85dbcf242 name=/runtime.v1.ImageService/PullImage
	Aug 05 22:57:14 addons-554168 crio[967]: time="2024-08-05 22:57:14.435614900Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Aug 05 22:57:14 addons-554168 crio[967]: time="2024-08-05 22:57:14.708352359Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.155384780Z" level=info msg="Stopping container: 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95 (timeout: 30s)" id=011ba478-a832-45ba-9c1c-6201e4891f74 name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 22:57:15 addons-554168 conmon[3446]: conmon 3e5a7a5c7f4808f41acd <ninfo>: container 3457 exited with status 1
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.301388374Z" level=info msg="Stopped container 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=011ba478-a832-45ba-9c1c-6201e4891f74 name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.301962818Z" level=info msg="Stopping pod sandbox: 02744ce5792767fa9b2d00ecc74ff603ad2e991c71a1701f5067ac415d3a98d9" id=20098544-f2ae-4282-970b-06c3229eb620 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.306574683Z" level=info msg="Stopped pod sandbox: 02744ce5792767fa9b2d00ecc74ff603ad2e991c71a1701f5067ac415d3a98d9" id=20098544-f2ae-4282-970b-06c3229eb620 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.713172304Z" level=info msg="Removing container: 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95" id=75dd2ae8-28f1-47ef-81da-e818f79eb91a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 05 22:57:15 addons-554168 crio[967]: time="2024-08-05 22:57:15.734226279Z" level=info msg="Removed container 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=75dd2ae8-28f1-47ef-81da-e818f79eb91a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 05 22:57:17 addons-554168 crio[967]: time="2024-08-05 22:57:17.410085705Z" level=info msg="Stopping container: 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e (timeout: 2s)" id=5802d24f-52f3-420d-8761-48ee8544881d name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.416664864Z" level=warning msg="Stopping container 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=5802d24f-52f3-420d-8761-48ee8544881d name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 22:57:19 addons-554168 conmon[5284]: conmon 03dd3a51befe8d8b4de6 <ninfo>: container 5295 exited with status 137
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.555025591Z" level=info msg="Stopped container 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e: ingress-nginx/ingress-nginx-controller-6d9bd977d4-kpd65/controller" id=5802d24f-52f3-420d-8761-48ee8544881d name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.555539760Z" level=info msg="Stopping pod sandbox: 9ce978822d1b318a30ba92ee898cae58ff914fffc015723c75611a17dee9f1bc" id=caa8fd77-1ba9-46be-8a3f-8dfd9d6a5f91 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.558823604Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-2ENDVJ626GLWKOM7 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GVG3WXLQRYCF6O7N - [0:0]\n-X KUBE-HP-GVG3WXLQRYCF6O7N\n-X KUBE-HP-2ENDVJ626GLWKOM7\nCOMMIT\n"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.572407951Z" level=info msg="Closing host port tcp:80"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.572463286Z" level=info msg="Closing host port tcp:443"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.574940056Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.574973410Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.575143057Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-kpd65 Namespace:ingress-nginx ID:9ce978822d1b318a30ba92ee898cae58ff914fffc015723c75611a17dee9f1bc UID:1ef33513-6701-4d18-97d8-2adbd5490d2c NetNS:/var/run/netns/b713627b-2236-446f-953b-8ea55c6432a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.575283003Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-kpd65 from CNI network \"kindnet\" (type=ptp)"
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.602874914Z" level=info msg="Stopped pod sandbox: 9ce978822d1b318a30ba92ee898cae58ff914fffc015723c75611a17dee9f1bc" id=caa8fd77-1ba9-46be-8a3f-8dfd9d6a5f91 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.723813247Z" level=info msg="Removing container: 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e" id=8e5d6f3c-4016-4983-bf77-017388740001 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 05 22:57:19 addons-554168 crio[967]: time="2024-08-05 22:57:19.740649143Z" level=info msg="Removed container 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e: ingress-nginx/ingress-nginx-controller-6d9bd977d4-kpd65/controller" id=8e5d6f3c-4016-4983-bf77-017388740001 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80848b7d83236       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   340c9d53081ff       nginx
	0fc837ff7f7da       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   9ddbfad4c53c8       headlamp-9d868696f-xxnxt
	6f5a67fac3455       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   d46f0383b9df2       busybox
	b16de6434aef8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                     0                   44739f69558bd       ingress-nginx-admission-patch-b9hcm
	daaa1393fe81b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   902568ebc6a9e       ingress-nginx-admission-create-qn994
	441fa6b6eb00a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        6 minutes ago       Running             metrics-server            0                   f926f727e2a7d       metrics-server-c59844bb4-4dgqd
	09cd48a169823       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   35e32eabd0e24       coredns-7db6d8ff4d-prz4h
	dba64b3f42ca1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   2be7b67b90ad9       storage-provisioner
	b2d8829265bce       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3                           6 minutes ago       Running             kindnet-cni               0                   4eca3dce0cb19       kindnet-jtck6
	42d10724c43ba       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                             7 minutes ago       Running             kube-proxy                0                   4025494cb8316       kube-proxy-lp29n
	0371ec481c801       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                             7 minutes ago       Running             kube-scheduler            0                   2436715e9647f       kube-scheduler-addons-554168
	8503af1c18aed       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   c96411bbc36fb       etcd-addons-554168
	9046f4aa4a92a       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                             7 minutes ago       Running             kube-apiserver            0                   afa62b12bb725       kube-apiserver-addons-554168
	63e045f9a9fa9       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                             7 minutes ago       Running             kube-controller-manager   0                   e4615108cd183       kube-controller-manager-addons-554168
	
	
	==> coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] <==
	[INFO] 10.244.0.14:37326 - 44980 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002181377s
	[INFO] 10.244.0.14:35169 - 22707 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110021s
	[INFO] 10.244.0.14:35169 - 46514 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097944s
	[INFO] 10.244.0.14:49280 - 1843 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122706s
	[INFO] 10.244.0.14:49280 - 33079 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048221s
	[INFO] 10.244.0.14:54000 - 2022 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083338s
	[INFO] 10.244.0.14:54000 - 45792 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000436861s
	[INFO] 10.244.0.14:57547 - 53254 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107166s
	[INFO] 10.244.0.14:57547 - 4352 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000227723s
	[INFO] 10.244.0.14:52631 - 36119 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001578511s
	[INFO] 10.244.0.14:52631 - 31768 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001663024s
	[INFO] 10.244.0.14:51829 - 15254 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131757s
	[INFO] 10.244.0.14:51829 - 62353 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169155s
	[INFO] 10.244.0.19:48977 - 27056 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004180922s
	[INFO] 10.244.0.19:46068 - 29605 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004129722s
	[INFO] 10.244.0.19:59931 - 21042 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166734s
	[INFO] 10.244.0.19:35098 - 12053 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001066s
	[INFO] 10.244.0.19:39743 - 22944 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121664s
	[INFO] 10.244.0.19:37699 - 31205 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120762s
	[INFO] 10.244.0.19:45400 - 41622 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003696152s
	[INFO] 10.244.0.19:59312 - 10551 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003016445s
	[INFO] 10.244.0.19:58918 - 55901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.00091638s
	[INFO] 10.244.0.19:53229 - 14043 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047259s
	[INFO] 10.244.0.22:59766 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000219493s
	[INFO] 10.244.0.22:43408 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127539s
	
	
	==> describe nodes <==
	Name:               addons-554168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-554168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=addons-554168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T22_50_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-554168
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 22:50:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-554168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 22:57:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:51:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-554168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6ac9b7fb0e1449fb7e688e34a1cf693
	  System UUID:                9c3ced50-cdba-4701-9230-5543127749e7
	  Boot ID:                    ab3fa9fd-00f6-443b-af0d-60e87e17630c
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  default                     hello-world-app-6778b5fc9f-lmjj4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  headlamp                    headlamp-9d868696f-xxnxt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 coredns-7db6d8ff4d-prz4h                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m4s
	  kube-system                 etcd-addons-554168                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m17s
	  kube-system                 kindnet-jtck6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m4s
	  kube-system                 kube-apiserver-addons-554168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-controller-manager-addons-554168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-proxy-lp29n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-addons-554168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 metrics-server-c59844bb4-4dgqd           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m25s)  kubelet          Node addons-554168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m25s)  kubelet          Node addons-554168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m25s (x8 over 7m25s)  kubelet          Node addons-554168 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m17s                  kubelet          Node addons-554168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s                  kubelet          Node addons-554168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s                  kubelet          Node addons-554168 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m5s                   node-controller  Node addons-554168 event: Registered Node addons-554168 in Controller
	  Normal  NodeReady                6m16s                  kubelet          Node addons-554168 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000670] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000862] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.001025] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +0.003877] FS-Cache: Duplicate cookie detected
	[  +0.000695] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000909] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c495d5fa
	[  +0.000976] FS-Cache: O-key=[8] 'e8633b0000000000'
	[  +0.000655] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000881] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000c84903e3
	[  +0.000991] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +2.077764] FS-Cache: Duplicate cookie detected
	[  +0.000839] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c4c8673a
	[  +0.001004] FS-Cache: O-key=[8] 'e5633b0000000000'
	[  +0.000662] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000868] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000b02f196c
	[  +0.001016] FS-Cache: N-key=[8] 'e5633b0000000000'
	[  +0.396957] FS-Cache: Duplicate cookie detected
	[  +0.000666] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000d829204a
	[  +0.001050] FS-Cache: O-key=[8] 'ed633b0000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000884] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.000977] FS-Cache: N-key=[8] 'ed633b0000000000'
	[Aug 5 21:59] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] <==
	{"level":"info","ts":"2024-08-05T22:50:00.192639Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-05T22:50:00.192848Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-05T22:50:00.248878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T22:50:00.248929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T22:50:00.248947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-05T22:50:00.248969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.248978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.248995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.249005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.252927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:50:00.252908Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-554168 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T22:50:00.254395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:50:00.254431Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.254845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T22:50:00.254908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T22:50:00.265474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-05T22:50:00.266634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.26756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.267751Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.272597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T22:50:24.067518Z","caller":"traceutil/trace.go:171","msg":"trace[2023474310] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"144.835509ms","start":"2024-08-05T22:50:23.922665Z","end":"2024-08-05T22:50:24.067501Z","steps":["trace[2023474310] 'process raft request'  (duration: 97.665994ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.067894Z","caller":"traceutil/trace.go:171","msg":"trace[1869494953] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"145.082579ms","start":"2024-08-05T22:50:23.922801Z","end":"2024-08-05T22:50:24.067884Z","steps":["trace[1869494953] 'process raft request'  (duration: 106.277102ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.663999Z","caller":"traceutil/trace.go:171","msg":"trace[1964051575] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"110.391315ms","start":"2024-08-05T22:50:24.553586Z","end":"2024-08-05T22:50:24.663977Z","steps":["trace[1964051575] 'process raft request'  (duration: 109.943156ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.664216Z","caller":"traceutil/trace.go:171","msg":"trace[1141828130] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"110.561414ms","start":"2024-08-05T22:50:24.553647Z","end":"2024-08-05T22:50:24.664208Z","steps":["trace[1141828130] 'process raft request'  (duration: 109.97783ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.664363Z","caller":"traceutil/trace.go:171","msg":"trace[173760013] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"110.671993ms","start":"2024-08-05T22:50:24.553684Z","end":"2024-08-05T22:50:24.664356Z","steps":["trace[173760013] 'process raft request'  (duration: 109.984304ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:57:24 up  7:39,  0 users,  load average: 0.32, 1.00, 0.99
	Linux addons-554168 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] <==
	E0805 22:56:05.972543       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 22:56:07.577867       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:07.577906       1 main.go:299] handling current node
	I0805 22:56:17.577912       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:17.577952       1 main.go:299] handling current node
	I0805 22:56:27.578128       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:27.578162       1 main.go:299] handling current node
	W0805 22:56:36.592307       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 22:56:36.592344       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 22:56:37.577618       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:37.577652       1 main.go:299] handling current node
	W0805 22:56:37.938892       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 22:56:37.938931       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0805 22:56:47.578065       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:47.578103       1 main.go:299] handling current node
	W0805 22:56:55.983630       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:56:55.983742       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 22:56:57.577539       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:56:57.577577       1 main.go:299] handling current node
	W0805 22:57:06.979808       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 22:57:06.979927       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 22:57:07.577393       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:57:07.577428       1 main.go:299] handling current node
	I0805 22:57:17.577385       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:57:17.577432       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] <==
	I0805 22:52:19.076337       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 22:53:03.091258       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54878: use of closed network connection
	I0805 22:53:50.222276       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0805 22:54:03.471417       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 22:54:23.108939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.109081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.131692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.131970       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.155005       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.155136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.172158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.172317       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.229268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.229410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 22:54:24.156353       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 22:54:24.230448       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 22:54:24.273760       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 22:54:30.969727       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.50.72"}
	E0805 22:54:31.117384       1 watch.go:250] http2: stream closed
	I0805 22:54:47.815648       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 22:54:48.847988       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 22:54:53.379722       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 22:54:53.690182       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.88.147"}
	I0805 22:57:14.298689       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.137.130"}
	E0805 22:57:15.734932       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] <==
	W0805 22:55:50.853807       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:55:50.853845       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:55:51.548303       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:55:51.548341       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:55:53.911836       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:55:53.911872       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:25.943400       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:25.943443       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:27.065596       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:27.065638       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:37.572613       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:37.572652       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:47.693461       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:47.693499       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:57:14.053277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="33.765944ms"
	I0805 22:57:14.065263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="11.935459ms"
	I0805 22:57:14.065422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="128.434µs"
	I0805 22:57:14.072533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="57.033µs"
	W0805 22:57:14.315736       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:14.315871       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:57:16.173273       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:16.173315       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:57:16.386356       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0805 22:57:16.389807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.595µs"
	I0805 22:57:16.396753       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] <==
	I0805 22:50:27.509970       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:50:27.711102       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0805 22:50:28.030605       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0805 22:50:28.030675       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:50:28.056078       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0805 22:50:28.056108       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0805 22:50:28.056135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:50:28.056356       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:50:28.056379       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:50:28.058296       1 config.go:192] "Starting service config controller"
	I0805 22:50:28.058323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:50:28.058362       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:50:28.058372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:50:28.058713       1 config.go:319] "Starting node config controller"
	I0805 22:50:28.058731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:50:28.158863       1 shared_informer.go:320] Caches are synced for node config
	I0805 22:50:28.164183       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:50:28.164207       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] <==
	W0805 22:50:04.740019       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 22:50:04.740034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 22:50:04.740071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:04.740084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 22:50:04.740117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:04.740133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:04.743872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 22:50:04.744546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 22:50:04.744721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 22:50:04.744999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 22:50:04.744830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:04.745097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:04.744964       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 22:50:04.745160       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:50:05.609666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:05.609814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:05.644540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:05.644597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:05.649343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:05.649452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 22:50:05.882909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 22:50:05.882952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 22:50:05.889969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:50:05.890014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 22:50:06.331288       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 22:57:14 addons-554168 kubelet[1564]: E0805 22:57:14.048151    1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6a1c2b8-213e-4ccb-8848-c9bddc5349b4" containerName="gadget"
	Aug 05 22:57:14 addons-554168 kubelet[1564]: I0805 22:57:14.048188    1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a1c2b8-213e-4ccb-8848-c9bddc5349b4" containerName="gadget"
	Aug 05 22:57:14 addons-554168 kubelet[1564]: I0805 22:57:14.048197    1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a1c2b8-213e-4ccb-8848-c9bddc5349b4" containerName="gadget"
	Aug 05 22:57:14 addons-554168 kubelet[1564]: I0805 22:57:14.135042    1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9d8f\" (UniqueName: \"kubernetes.io/projected/bf5d27cc-5d30-4c9c-9d1c-74434caa04e9-kube-api-access-f9d8f\") pod \"hello-world-app-6778b5fc9f-lmjj4\" (UID: \"bf5d27cc-5d30-4c9c-9d1c-74434caa04e9\") " pod="default/hello-world-app-6778b5fc9f-lmjj4"
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.444633    1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcthj\" (UniqueName: \"kubernetes.io/projected/fa78f0fa-4656-494a-8b6f-92f40e4c8f8b-kube-api-access-zcthj\") pod \"fa78f0fa-4656-494a-8b6f-92f40e4c8f8b\" (UID: \"fa78f0fa-4656-494a-8b6f-92f40e4c8f8b\") "
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.446606    1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa78f0fa-4656-494a-8b6f-92f40e4c8f8b-kube-api-access-zcthj" (OuterVolumeSpecName: "kube-api-access-zcthj") pod "fa78f0fa-4656-494a-8b6f-92f40e4c8f8b" (UID: "fa78f0fa-4656-494a-8b6f-92f40e4c8f8b"). InnerVolumeSpecName "kube-api-access-zcthj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.545966    1564 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zcthj\" (UniqueName: \"kubernetes.io/projected/fa78f0fa-4656-494a-8b6f-92f40e4c8f8b-kube-api-access-zcthj\") on node \"addons-554168\" DevicePath \"\""
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.711811    1564 scope.go:117] "RemoveContainer" containerID="3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95"
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.734554    1564 scope.go:117] "RemoveContainer" containerID="3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95"
	Aug 05 22:57:15 addons-554168 kubelet[1564]: E0805 22:57:15.738352    1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95\": container with ID starting with 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95 not found: ID does not exist" containerID="3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95"
	Aug 05 22:57:15 addons-554168 kubelet[1564]: I0805 22:57:15.738393    1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95"} err="failed to get container status \"3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95\": rpc error: code = NotFound desc = could not find container \"3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95\": container with ID starting with 3e5a7a5c7f4808f41acd0c46380cf3ff38998ad36619f4c9666058403edbbf95 not found: ID does not exist"
	Aug 05 22:57:17 addons-554168 kubelet[1564]: I0805 22:57:17.427326    1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4051bec-9af3-4cb2-afae-6072a2ad51f3" path="/var/lib/kubelet/pods/c4051bec-9af3-4cb2-afae-6072a2ad51f3/volumes"
	Aug 05 22:57:17 addons-554168 kubelet[1564]: I0805 22:57:17.427720    1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f23ae3-4314-4a2f-860e-90fdb9747f5f" path="/var/lib/kubelet/pods/c4f23ae3-4314-4a2f-860e-90fdb9747f5f/volumes"
	Aug 05 22:57:17 addons-554168 kubelet[1564]: I0805 22:57:17.428042    1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa78f0fa-4656-494a-8b6f-92f40e4c8f8b" path="/var/lib/kubelet/pods/fa78f0fa-4656-494a-8b6f-92f40e4c8f8b/volumes"
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.722736    1564 scope.go:117] "RemoveContainer" containerID="03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e"
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.740907    1564 scope.go:117] "RemoveContainer" containerID="03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e"
	Aug 05 22:57:19 addons-554168 kubelet[1564]: E0805 22:57:19.741322    1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e\": container with ID starting with 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e not found: ID does not exist" containerID="03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e"
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.741367    1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e"} err="failed to get container status \"03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e\": rpc error: code = NotFound desc = could not find container \"03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e\": container with ID starting with 03dd3a51befe8d8b4de62277938b55644a37ceb8bce31560484cb1d5fcdf754e not found: ID does not exist"
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.772740    1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ef33513-6701-4d18-97d8-2adbd5490d2c-webhook-cert\") pod \"1ef33513-6701-4d18-97d8-2adbd5490d2c\" (UID: \"1ef33513-6701-4d18-97d8-2adbd5490d2c\") "
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.772797    1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkq7n\" (UniqueName: \"kubernetes.io/projected/1ef33513-6701-4d18-97d8-2adbd5490d2c-kube-api-access-rkq7n\") pod \"1ef33513-6701-4d18-97d8-2adbd5490d2c\" (UID: \"1ef33513-6701-4d18-97d8-2adbd5490d2c\") "
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.776431    1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef33513-6701-4d18-97d8-2adbd5490d2c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1ef33513-6701-4d18-97d8-2adbd5490d2c" (UID: "1ef33513-6701-4d18-97d8-2adbd5490d2c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.776541    1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef33513-6701-4d18-97d8-2adbd5490d2c-kube-api-access-rkq7n" (OuterVolumeSpecName: "kube-api-access-rkq7n") pod "1ef33513-6701-4d18-97d8-2adbd5490d2c" (UID: "1ef33513-6701-4d18-97d8-2adbd5490d2c"). InnerVolumeSpecName "kube-api-access-rkq7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.873055    1564 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ef33513-6701-4d18-97d8-2adbd5490d2c-webhook-cert\") on node \"addons-554168\" DevicePath \"\""
	Aug 05 22:57:19 addons-554168 kubelet[1564]: I0805 22:57:19.873096    1564 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rkq7n\" (UniqueName: \"kubernetes.io/projected/1ef33513-6701-4d18-97d8-2adbd5490d2c-kube-api-access-rkq7n\") on node \"addons-554168\" DevicePath \"\""
	Aug 05 22:57:21 addons-554168 kubelet[1564]: I0805 22:57:21.430548    1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef33513-6701-4d18-97d8-2adbd5490d2c" path="/var/lib/kubelet/pods/1ef33513-6701-4d18-97d8-2adbd5490d2c/volumes"
	
	
	==> storage-provisioner [dba64b3f42ca10b70dc9271763bb155e7685614cb272a5f1577758aab31ea154] <==
	I0805 22:51:09.110258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:51:09.122201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:51:09.122357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:51:09.133153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:51:09.133641       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6a17fc7-bc49-4853-84f4-93a994633eae", APIVersion:"v1", ResourceVersion:"935", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-554168_2934d922-49fa-44aa-8552-61be334d079d became leader
	I0805 22:51:09.133747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-554168_2934d922-49fa-44aa-8552-61be334d079d!
	I0805 22:51:09.234363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-554168_2934d922-49fa-44aa-8552-61be334d079d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-554168 -n addons-554168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-554168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-6778b5fc9f-lmjj4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-554168 describe pod hello-world-app-6778b5fc9f-lmjj4
helpers_test.go:282: (dbg) kubectl --context addons-554168 describe pod hello-world-app-6778b5fc9f-lmjj4:

-- stdout --
	Name:             hello-world-app-6778b5fc9f-lmjj4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-554168/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 22:57:14 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=6778b5fc9f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-6778b5fc9f
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9d8f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9d8f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/hello-world-app-6778b5fc9f-lmjj4 to addons-554168
	  Normal  Pulling    12s   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
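Note: the describe output shows the pod Pending only because the docker.io/kicbase/echo-server:1.0 pull was still in flight 12s after scheduling; scheduling and volume projection both succeeded. A quick way to keep watching it instead of re-running describe (stock kubectl, not part of the harness):

	kubectl --context addons-554168 get pod hello-world-app-6778b5fc9f-lmjj4 -w
	kubectl --context addons-554168 get events --field-selector involvedObject.name=hello-world-app-6778b5fc9f-lmjj4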
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.97s)
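Note: the step that failed was the in-node curl against the ingress controller, so a useful manual reproduction separates controller health from request routing. A hedged sketch, assuming the ingress addon has not yet been disabled and its Deployment carries the usual ingress-nginx-controller name; --max-time only bounds the hang:

	kubectl --context addons-554168 -n ingress-nginx get pods -o wide
	kubectl --context addons-554168 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	out/minikube-linux-arm64 -p addons-554168 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"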

TestAddons/parallel/MetricsServer (334.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.718042ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-4dgqd" [87a4cfae-8eae-4755-8efe-9e869f5ea69e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004784595s
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (95.696758ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m16.21606648s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (91.236953ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m20.765010251s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (91.919039ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m23.464980769s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (99.040764ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m29.758826539s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (112.59126ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m35.731022455s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (93.621673ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 4m56.434174634s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (86.993726ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 5m9.114247461s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (90.789569ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 5m51.544546875s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (98.001944ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 6m28.003718104s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (86.57291ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 7m29.713655919s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (89.903453ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 8m25.97402941s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-554168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-554168 top pods -n kube-system: exit status 1 (350.600861ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-prz4h, age: 9m41.724062656s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
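Note: kubectl top can only succeed once the v1beta1.metrics.k8s.io APIService is Available and metrics-server has completed at least one scrape cycle, so ten minutes of "Metrics not available" points at the aggregation layer rather than the pod itself, which the wait above showed Running. A hedged way to probe the metrics API directly (stock kubectl calls; the Deployment name metrics-server is assumed from the default addon):

	kubectl --context addons-554168 get apiservices v1beta1.metrics.k8s.io
	kubectl --context addons-554168 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-554168 -n kube-system logs deploy/metrics-server --tail=50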
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-554168
helpers_test.go:235: (dbg) docker inspect addons-554168:

-- stdout --
	[
	    {
	        "Id": "00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f",
	        "Created": "2024-08-05T22:49:41.710004288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1566619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-05T22:49:41.847014635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/hostname",
	        "HostsPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/hosts",
	        "LogPath": "/var/lib/docker/containers/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f/00fe2ccfdede16d7d5741bf071045f882ac7c37df04ed0de7d796f00958de58f-json.log",
	        "Name": "/addons-554168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-554168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-554168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd-init/diff:/var/lib/docker/overlay2/86ccb695426d1801c241efb9fd4274cb7838d591a3ef1deb45fd2daef819089e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fae22c08e1a29bc4f998345685a1d64bd4fd698c65da604d53ce1513d2c635fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-554168",
	                "Source": "/var/lib/docker/volumes/addons-554168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-554168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-554168",
	                "name.minikube.sigs.k8s.io": "addons-554168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2d7ac081a5f115bf9c853d50fa4efab21fbdba26e321a233c2b94a476826576",
	            "SandboxKey": "/var/run/docker/netns/d2d7ac081a5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34639"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34640"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-554168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a91da328e200f47df71e5a86f827f35495060e6334363392667e57e82f61a2c6",
	                    "EndpointID": "756b160513df82d17a803a0bc0c9b7f24bd1cba6d70ec9516860d76ef0d25dbc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-554168",
	                        "00fe2ccfdede"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
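Note: most of the inspect payload above is static container configuration; the fields the harness actually consumes are the published host ports and the fixed IPAM address. A one-liner using Docker's own template syntax that extracts just the API-server port mapping (given the Ports block above it should print 34640):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-554168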
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-554168 -n addons-554168
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 logs -n 25: (1.506147809s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-565200 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | download-docker-565200                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-565200                                                                   | download-docker-565200 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-045657   | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-045657                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36853                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-045657                                                                     | binary-mirror-045657   | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-554168 --wait=true                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-554168 ip                                                                            | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | -p addons-554168                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-554168 ssh cat                                                                       | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-61300692-a5b6-4c41-ab58-cbf29128fef9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-554168                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-554168                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-554168 ssh curl -s                                                                   | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-554168 ip                                                                            | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-554168 addons disable                                                                | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-554168 addons                                                                        | addons-554168          | jenkins | v1.33.1 | 05 Aug 24 23:00 UTC | 05 Aug 24 23:00 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:49:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:49:15.926842 1566127 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:49:15.926977 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:15.927140 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:49:15.927157 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:15.927392 1566127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 22:49:15.927912 1566127 out.go:298] Setting JSON to false
	I0805 22:49:15.928801 1566127 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27096,"bootTime":1722871060,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 22:49:15.928879 1566127 start.go:139] virtualization:  
	I0805 22:49:15.931641 1566127 out.go:177] * [addons-554168] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 22:49:15.934606 1566127 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 22:49:15.934655 1566127 notify.go:220] Checking for updates...
	I0805 22:49:15.937045 1566127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:49:15.939618 1566127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:49:15.941987 1566127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 22:49:15.944091 1566127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 22:49:15.946728 1566127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 22:49:15.949139 1566127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:49:15.970171 1566127 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 22:49:15.970293 1566127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:16.044322 1566127 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:16.033803322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:16.044451 1566127 docker.go:307] overlay module found
	I0805 22:49:16.046620 1566127 out.go:177] * Using the docker driver based on user configuration
	I0805 22:49:16.048448 1566127 start.go:297] selected driver: docker
	I0805 22:49:16.048472 1566127 start.go:901] validating driver "docker" against <nil>
	I0805 22:49:16.048488 1566127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 22:49:16.049229 1566127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:16.100271 1566127 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:16.090520508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:16.100432 1566127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:49:16.100757 1566127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:49:16.102435 1566127 out.go:177] * Using Docker driver with root privileges
	I0805 22:49:16.104207 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:49:16.104230 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:49:16.104243 1566127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 22:49:16.104342 1566127 start.go:340] cluster config:
	{Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:16.106532 1566127 out.go:177] * Starting "addons-554168" primary control-plane node in "addons-554168" cluster
	I0805 22:49:16.108064 1566127 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 22:49:16.109825 1566127 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0805 22:49:16.111787 1566127 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 22:49:16.111954 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:16.111988 1566127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0805 22:49:16.112000 1566127 cache.go:56] Caching tarball of preloaded images
	I0805 22:49:16.112066 1566127 preload.go:172] Found /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0805 22:49:16.112081 1566127 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 22:49:16.112416 1566127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json ...
	I0805 22:49:16.112442 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json: {Name:mkaaf90554ae570281dc409936a60acfcebfaea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:16.128238 1566127 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:49:16.128366 1566127 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 22:49:16.128390 1566127 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 22:49:16.128398 1566127 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 22:49:16.128406 1566127 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 22:49:16.128415 1566127 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0805 22:49:33.150301 1566127 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0805 22:49:33.150347 1566127 cache.go:194] Successfully downloaded all kic artifacts
	I0805 22:49:33.150380 1566127 start.go:360] acquireMachinesLock for addons-554168: {Name:mk99fd9ec2c5ec7bf0bc1e27cb3a59cdbefafe59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:33.151094 1566127 start.go:364] duration metric: took 686.655µs to acquireMachinesLock for "addons-554168"
	I0805 22:49:33.151134 1566127 start.go:93] Provisioning new machine with config: &{Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:49:33.151229 1566127 start.go:125] createHost starting for "" (driver="docker")
	I0805 22:49:33.153580 1566127 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0805 22:49:33.153832 1566127 start.go:159] libmachine.API.Create for "addons-554168" (driver="docker")
	I0805 22:49:33.153868 1566127 client.go:168] LocalClient.Create starting
	I0805 22:49:33.154005 1566127 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem
	I0805 22:49:34.102408 1566127 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem
	I0805 22:49:35.223392 1566127 cli_runner.go:164] Run: docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0805 22:49:35.239271 1566127 cli_runner.go:211] docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0805 22:49:35.239369 1566127 network_create.go:284] running [docker network inspect addons-554168] to gather additional debugging logs...
	I0805 22:49:35.239393 1566127 cli_runner.go:164] Run: docker network inspect addons-554168
	W0805 22:49:35.255244 1566127 cli_runner.go:211] docker network inspect addons-554168 returned with exit code 1
	I0805 22:49:35.255294 1566127 network_create.go:287] error running [docker network inspect addons-554168]: docker network inspect addons-554168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-554168 not found
	I0805 22:49:35.255324 1566127 network_create.go:289] output of [docker network inspect addons-554168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-554168 not found
	
	** /stderr **
	I0805 22:49:35.255432 1566127 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 22:49:35.270195 1566127 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000ecb0}
	I0805 22:49:35.270241 1566127 network_create.go:124] attempt to create docker network addons-554168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0805 22:49:35.270346 1566127 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-554168 addons-554168
	I0805 22:49:35.345733 1566127 network_create.go:108] docker network addons-554168 192.168.49.0/24 created
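(Editor's note: the subnet and gateway that minikube settled on above can be confirmed from the host; a minimal sketch, assuming only the docker CLI:)
	docker network inspect addons-554168 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.49.0/24 192.168.49.1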
	I0805 22:49:35.345768 1566127 kic.go:121] calculated static IP "192.168.49.2" for the "addons-554168" container
	I0805 22:49:35.345843 1566127 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0805 22:49:35.366850 1566127 cli_runner.go:164] Run: docker volume create addons-554168 --label name.minikube.sigs.k8s.io=addons-554168 --label created_by.minikube.sigs.k8s.io=true
	I0805 22:49:35.383637 1566127 oci.go:103] Successfully created a docker volume addons-554168
	I0805 22:49:35.383730 1566127 cli_runner.go:164] Run: docker run --rm --name addons-554168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --entrypoint /usr/bin/test -v addons-554168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0805 22:49:37.365232 1566127 cli_runner.go:217] Completed: docker run --rm --name addons-554168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --entrypoint /usr/bin/test -v addons-554168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (1.981457579s)
	I0805 22:49:37.365265 1566127 oci.go:107] Successfully prepared a docker volume addons-554168
	I0805 22:49:37.365288 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:37.365308 1566127 kic.go:194] Starting extracting preloaded images to volume ...
	I0805 22:49:37.365397 1566127 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-554168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0805 22:49:41.639419 1566127 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-554168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273974438s)
	I0805 22:49:41.639455 1566127 kic.go:203] duration metric: took 4.274143633s to extract preloaded images to volume ...
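(Editor's note: the two docker runs above are minikube's preload path: a throwaway sidecar container first forces /var/lib to exist inside the named volume, then tar with the lz4 filter unpacks the preloaded image tarball into it, so the node starts with the v1.30.3 images already in cri-o's storage. A quick sanity check of the volume, as a sketch; alpine here is an assumption, any small image works:)
	docker run --rm -v addons-554168:/var alpine ls /var/lib
	# should now list the extracted container-storage directories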
	W0805 22:49:41.639598 1566127 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0805 22:49:41.639712 1566127 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0805 22:49:41.695473 1566127 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-554168 --name addons-554168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-554168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-554168 --network addons-554168 --ip 192.168.49.2 --volume addons-554168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0805 22:49:42.020286 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Running}}
	I0805 22:49:42.051796 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.075264 1566127 cli_runner.go:164] Run: docker exec addons-554168 stat /var/lib/dpkg/alternatives/iptables
	I0805 22:49:42.163739 1566127 oci.go:144] the created container "addons-554168" has a running status.
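(Editor's note: the node container above publishes 127.0.0.1:: with a random host port for 22, 2376, 5000, 8443 and 32443; the SSH mapping the provisioner resolves a few lines below, 127.0.0.1:34637, can be recovered directly. A sketch:)
	docker port addons-554168 22/tcp
	# 127.0.0.1:34637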
	I0805 22:49:42.163775 1566127 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa...
	I0805 22:49:42.765998 1566127 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0805 22:49:42.797662 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.820714 1566127 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0805 22:49:42.820741 1566127 kic_runner.go:114] Args: [docker exec --privileged addons-554168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0805 22:49:42.900101 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:49:42.924858 1566127 machine.go:94] provisionDockerMachine start ...
	I0805 22:49:42.924955 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:42.955057 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:42.955326 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:42.955342 1566127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 22:49:43.101295 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554168
	
	I0805 22:49:43.101322 1566127 ubuntu.go:169] provisioning hostname "addons-554168"
	I0805 22:49:43.101390 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.123503 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:43.123760 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:43.123772 1566127 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-554168 && echo "addons-554168" | sudo tee /etc/hostname
	I0805 22:49:43.277247 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554168
	
	I0805 22:49:43.277347 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.298250 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:43.298510 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:43.298534 1566127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-554168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-554168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-554168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 22:49:43.432711 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 22:49:43.432783 1566127 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19373-1559727/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-1559727/.minikube}
	I0805 22:49:43.432822 1566127 ubuntu.go:177] setting up certificates
	I0805 22:49:43.432832 1566127 provision.go:84] configureAuth start
	I0805 22:49:43.432903 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:43.449429 1566127 provision.go:143] copyHostCerts
	I0805 22:49:43.449516 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.pem (1078 bytes)
	I0805 22:49:43.449656 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/cert.pem (1123 bytes)
	I0805 22:49:43.449731 1566127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/key.pem (1679 bytes)
	I0805 22:49:43.449797 1566127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem org=jenkins.addons-554168 san=[127.0.0.1 192.168.49.2 addons-554168 localhost minikube]
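(Editor's note: the server certificate above is minted with the SAN list shown: 127.0.0.1, 192.168.49.2, addons-554168, localhost, minikube. The SANs on the resulting PEM can be double-checked with openssl; a sketch:)
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'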
	I0805 22:49:43.917729 1566127 provision.go:177] copyRemoteCerts
	I0805 22:49:43.917811 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 22:49:43.917860 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:43.934205 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.030068 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 22:49:44.055161 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 22:49:44.079624 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 22:49:44.105929 1566127 provision.go:87] duration metric: took 673.081336ms to configureAuth
	I0805 22:49:44.105956 1566127 ubuntu.go:193] setting minikube options for container-runtime
	I0805 22:49:44.106155 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:49:44.106274 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.122399 1566127 main.go:141] libmachine: Using SSH client type: native
	I0805 22:49:44.122647 1566127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34637 <nil> <nil>}
	I0805 22:49:44.122671 1566127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 22:49:44.353701 1566127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 22:49:44.353727 1566127 machine.go:97] duration metric: took 1.428845755s to provisionDockerMachine
	I0805 22:49:44.353737 1566127 client.go:171] duration metric: took 11.199863013s to LocalClient.Create
	I0805 22:49:44.353751 1566127 start.go:167] duration metric: took 11.199919398s to libmachine.API.Create "addons-554168"
	I0805 22:49:44.353758 1566127 start.go:293] postStartSetup for "addons-554168" (driver="docker")
	I0805 22:49:44.353771 1566127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 22:49:44.353842 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 22:49:44.353930 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.371115 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.465754 1566127 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 22:49:44.468910 1566127 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0805 22:49:44.468945 1566127 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0805 22:49:44.468955 1566127 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0805 22:49:44.468962 1566127 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0805 22:49:44.468973 1566127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/addons for local assets ...
	I0805 22:49:44.469057 1566127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/files for local assets ...
	I0805 22:49:44.469079 1566127 start.go:296] duration metric: took 115.315555ms for postStartSetup
	I0805 22:49:44.469391 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:44.485499 1566127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/config.json ...
	I0805 22:49:44.485813 1566127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 22:49:44.485869 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.504375 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.598270 1566127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0805 22:49:44.602832 1566127 start.go:128] duration metric: took 11.451584134s to createHost
	I0805 22:49:44.602857 1566127 start.go:83] releasing machines lock for "addons-554168", held for 11.451744124s
	I0805 22:49:44.602929 1566127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-554168
	I0805 22:49:44.618721 1566127 ssh_runner.go:195] Run: cat /version.json
	I0805 22:49:44.618743 1566127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 22:49:44.618771 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.618786 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:49:44.637055 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.637853 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:49:44.732978 1566127 ssh_runner.go:195] Run: systemctl --version
	I0805 22:49:44.871905 1566127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 22:49:45.037706 1566127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 22:49:45.053244 1566127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:49:45.081309 1566127 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0805 22:49:45.081469 1566127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:49:45.133281 1566127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
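(Editor's note: the two find/mv passes above sideline the stock CNI configs by renaming them with a .mk_disabled suffix, first the loopback config and then the bridge and podman configs listed in the line above, leaving only the CNI that minikube installs later. To see what was disabled, a sketch run against the node container:)
	docker exec addons-554168 ls /etc/cni/net.d
	# 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled  ...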
	I0805 22:49:45.133370 1566127 start.go:495] detecting cgroup driver to use...
	I0805 22:49:45.133443 1566127 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0805 22:49:45.133532 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 22:49:45.158587 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 22:49:45.181794 1566127 docker.go:217] disabling cri-docker service (if available) ...
	I0805 22:49:45.182109 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 22:49:45.200666 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 22:49:45.220648 1566127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 22:49:45.338236 1566127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 22:49:45.445089 1566127 docker.go:233] disabling docker service ...
	I0805 22:49:45.445155 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 22:49:45.467864 1566127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 22:49:45.481000 1566127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 22:49:45.575100 1566127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 22:49:45.675035 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 22:49:45.687255 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 22:49:45.703002 1566127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 22:49:45.703110 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.713010 1566127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 22:49:45.713089 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.722946 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.732372 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.742601 1566127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 22:49:45.751669 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.761377 1566127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.776597 1566127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:49:45.786342 1566127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 22:49:45.794852 1566127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 22:49:45.803213 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:49:45.896578 1566127 ssh_runner.go:195] Run: sudo systemctl restart crio
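(Editor's note: the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager becomes cgroupfs with conmon placed in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A spot-check of the result, as a sketch:)
	docker exec addons-554168 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf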
	I0805 22:49:46.007689 1566127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 22:49:46.007811 1566127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 22:49:46.012059 1566127 start.go:563] Will wait 60s for crictl version
	I0805 22:49:46.012138 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:49:46.015788 1566127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 22:49:46.052400 1566127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0805 22:49:46.052526 1566127 ssh_runner.go:195] Run: crio --version
	I0805 22:49:46.090545 1566127 ssh_runner.go:195] Run: crio --version
	I0805 22:49:46.130981 1566127 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0805 22:49:46.132893 1566127 cli_runner.go:164] Run: docker network inspect addons-554168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 22:49:46.148236 1566127 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0805 22:49:46.151999 1566127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:49:46.162630 1566127 kubeadm.go:883] updating cluster {Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 22:49:46.162753 1566127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:46.162819 1566127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:49:46.240769 1566127 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:49:46.240794 1566127 crio.go:433] Images already preloaded, skipping extraction
	I0805 22:49:46.240849 1566127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:49:46.276671 1566127 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:49:46.276696 1566127 cache_images.go:84] Images are preloaded, skipping loading
	I0805 22:49:46.276704 1566127 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0805 22:49:46.276805 1566127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-554168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 22:49:46.276890 1566127 ssh_runner.go:195] Run: crio config
	I0805 22:49:46.330094 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:49:46.330122 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:49:46.330131 1566127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 22:49:46.330192 1566127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-554168 NodeName:addons-554168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 22:49:46.330358 1566127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-554168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
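(Editor's note: the rendered file above carries four kubeadm documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, and is copied to /var/tmp/minikube/kubeadm.yaml a few lines below. On recent kubeadm releases the file can be linted before use; a sketch, assuming "kubeadm config validate" is present in the v1.30.3 binary:)
	docker exec addons-554168 sudo /var/lib/minikube/binaries/v1.30.3/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml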
	
	I0805 22:49:46.330441 1566127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 22:49:46.338992 1566127 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 22:49:46.339112 1566127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 22:49:46.347447 1566127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0805 22:49:46.365334 1566127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 22:49:46.383399 1566127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
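(Editor's note: the kubelet flags block rendered earlier ends up in the 10-kubeadm.conf drop-in just copied, alongside the kubelet.service unit. To read back the effective unit on the node, a sketch:)
	docker exec addons-554168 systemctl cat kubelet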
	I0805 22:49:46.401499 1566127 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0805 22:49:46.404841 1566127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:49:46.415150 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:49:46.505736 1566127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:49:46.520069 1566127 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168 for IP: 192.168.49.2
	I0805 22:49:46.520135 1566127 certs.go:194] generating shared ca certs ...
	I0805 22:49:46.520165 1566127 certs.go:226] acquiring lock for ca certs: {Name:mk45a3b9d27e38f3abe9128d73d1ec1f570fe6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:46.520949 1566127 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key
	I0805 22:49:47.094710 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt ...
	I0805 22:49:47.094743 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt: {Name:mk36f596ece4fe743782bfc12058efc8b4800ec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.095526 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key ...
	I0805 22:49:47.095566 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key: {Name:mke4ba11bb197a5d9b523ed8404f8129f4886c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.096188 1566127 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key
	I0805 22:49:47.853668 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt ...
	I0805 22:49:47.853703 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt: {Name:mkdd749e1e56ff4f622e209e7d20e736bad13104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.853894 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key ...
	I0805 22:49:47.853912 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key: {Name:mk082611ab0d6b76988b12bd06f7c6568264a404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:47.853989 1566127 certs.go:256] generating profile certs ...
	I0805 22:49:47.854052 1566127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key
	I0805 22:49:47.854067 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt with IP's: []
	I0805 22:49:48.645261 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt ...
	I0805 22:49:48.645295 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: {Name:mk84ec6671d5f83acdfadf98752918d45c66853f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.646100 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key ...
	I0805 22:49:48.646121 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.key: {Name:mkfdac9cd866d8539b411bf0d4357e5aae2e3ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.646271 1566127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e
	I0805 22:49:48.646299 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0805 22:49:48.903510 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e ...
	I0805 22:49:48.903548 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e: {Name:mkc0414456cf3231fb046ce9605c72cafb4d26dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.904362 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e ...
	I0805 22:49:48.904387 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e: {Name:mkc96a745712a90db4ab834b3fd463b4bacab95e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.904526 1566127 certs.go:381] copying /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt.613b444e -> /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt
	I0805 22:49:48.904639 1566127 certs.go:385] copying /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key.613b444e -> /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key
	I0805 22:49:48.904697 1566127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key
	I0805 22:49:48.904720 1566127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt with IP's: []
	I0805 22:49:49.095118 1566127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt ...
	I0805 22:49:49.095148 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt: {Name:mk203d37e55d86765d49897e2b602446e5239683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:49.095987 1566127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key ...
	I0805 22:49:49.096012 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key: {Name:mkbb1b5267aa0f1a8fa6d0eda1ba781d0ceb8dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:49.096220 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 22:49:49.096270 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem (1078 bytes)
	I0805 22:49:49.096301 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem (1123 bytes)
	I0805 22:49:49.096330 1566127 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem (1679 bytes)
	I0805 22:49:49.096957 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 22:49:49.122100 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 22:49:49.148496 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 22:49:49.175316 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 22:49:49.199992 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 22:49:49.223896 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 22:49:49.248014 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 22:49:49.272272 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 22:49:49.296375 1566127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 22:49:49.321195 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 22:49:49.339067 1566127 ssh_runner.go:195] Run: openssl version
	I0805 22:49:49.344497 1566127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 22:49:49.354337 1566127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.357790 1566127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.357857 1566127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:49:49.364738 1566127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 22:49:49.374046 1566127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 22:49:49.377275 1566127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 22:49:49.377326 1566127 kubeadm.go:392] StartCluster: {Name:addons-554168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-554168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:49.377421 1566127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 22:49:49.377487 1566127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 22:49:49.417488 1566127 cri.go:89] found id: ""
	I0805 22:49:49.417609 1566127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 22:49:49.426568 1566127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 22:49:49.435439 1566127 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0805 22:49:49.435548 1566127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 22:49:49.444663 1566127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 22:49:49.444685 1566127 kubeadm.go:157] found existing configuration files:
	
	I0805 22:49:49.444741 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 22:49:49.453554 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 22:49:49.453647 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 22:49:49.462230 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 22:49:49.471038 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 22:49:49.471109 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 22:49:49.479268 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 22:49:49.488024 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 22:49:49.488119 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 22:49:49.496498 1566127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 22:49:49.505603 1566127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 22:49:49.505701 1566127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 22:49:49.513934 1566127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0805 22:49:49.621965 1566127 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0805 22:49:49.694105 1566127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 22:50:08.117301 1566127 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 22:50:08.117358 1566127 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 22:50:08.117443 1566127 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0805 22:50:08.117496 1566127 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0805 22:50:08.117530 1566127 kubeadm.go:310] OS: Linux
	I0805 22:50:08.117577 1566127 kubeadm.go:310] CGROUPS_CPU: enabled
	I0805 22:50:08.117624 1566127 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0805 22:50:08.117670 1566127 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0805 22:50:08.117722 1566127 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0805 22:50:08.117769 1566127 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0805 22:50:08.117816 1566127 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0805 22:50:08.117859 1566127 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0805 22:50:08.117906 1566127 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0805 22:50:08.117953 1566127 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0805 22:50:08.118022 1566127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 22:50:08.118113 1566127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 22:50:08.118202 1566127 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 22:50:08.118263 1566127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 22:50:08.120497 1566127 out.go:204]   - Generating certificates and keys ...
	I0805 22:50:08.120613 1566127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 22:50:08.120685 1566127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 22:50:08.120754 1566127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 22:50:08.120814 1566127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 22:50:08.120876 1566127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 22:50:08.120931 1566127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 22:50:08.120989 1566127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 22:50:08.121112 1566127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-554168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0805 22:50:08.121168 1566127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 22:50:08.121286 1566127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-554168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0805 22:50:08.121353 1566127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 22:50:08.121418 1566127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 22:50:08.121464 1566127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 22:50:08.121521 1566127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 22:50:08.121576 1566127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 22:50:08.121636 1566127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 22:50:08.121694 1566127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 22:50:08.121759 1566127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 22:50:08.121816 1566127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 22:50:08.121898 1566127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 22:50:08.121965 1566127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 22:50:08.123692 1566127 out.go:204]   - Booting up control plane ...
	I0805 22:50:08.123808 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 22:50:08.123895 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 22:50:08.123989 1566127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 22:50:08.124118 1566127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 22:50:08.124209 1566127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 22:50:08.124253 1566127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 22:50:08.124406 1566127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 22:50:08.124487 1566127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 22:50:08.124573 1566127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502048523s
	I0805 22:50:08.124660 1566127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 22:50:08.124731 1566127 kubeadm.go:310] [api-check] The API server is healthy after 7.002176584s
	I0805 22:50:08.124848 1566127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 22:50:08.124976 1566127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 22:50:08.125037 1566127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 22:50:08.125235 1566127 kubeadm.go:310] [mark-control-plane] Marking the node addons-554168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 22:50:08.125297 1566127 kubeadm.go:310] [bootstrap-token] Using token: ptxymf.hwassnejjeyita55
	I0805 22:50:08.127020 1566127 out.go:204]   - Configuring RBAC rules ...
	I0805 22:50:08.127124 1566127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 22:50:08.127207 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 22:50:08.127341 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 22:50:08.127482 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0805 22:50:08.127593 1566127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 22:50:08.127675 1566127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 22:50:08.127787 1566127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 22:50:08.127828 1566127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 22:50:08.127872 1566127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 22:50:08.127877 1566127 kubeadm.go:310] 
	I0805 22:50:08.127935 1566127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 22:50:08.127940 1566127 kubeadm.go:310] 
	I0805 22:50:08.128014 1566127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 22:50:08.128018 1566127 kubeadm.go:310] 
	I0805 22:50:08.128042 1566127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 22:50:08.128098 1566127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 22:50:08.128168 1566127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 22:50:08.128173 1566127 kubeadm.go:310] 
	I0805 22:50:08.128225 1566127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 22:50:08.128230 1566127 kubeadm.go:310] 
	I0805 22:50:08.128276 1566127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 22:50:08.128280 1566127 kubeadm.go:310] 
	I0805 22:50:08.128330 1566127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 22:50:08.128402 1566127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 22:50:08.128469 1566127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 22:50:08.128473 1566127 kubeadm.go:310] 
	I0805 22:50:08.128588 1566127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 22:50:08.128728 1566127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 22:50:08.128746 1566127 kubeadm.go:310] 
	I0805 22:50:08.128835 1566127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ptxymf.hwassnejjeyita55 \
	I0805 22:50:08.128944 1566127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4344817edd8bd0039bbc7d4d6af60e654808fcdca6a599af4a5badecee199b0 \
	I0805 22:50:08.128968 1566127 kubeadm.go:310] 	--control-plane 
	I0805 22:50:08.128973 1566127 kubeadm.go:310] 
	I0805 22:50:08.129079 1566127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 22:50:08.129096 1566127 kubeadm.go:310] 
	I0805 22:50:08.129191 1566127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ptxymf.hwassnejjeyita55 \
	I0805 22:50:08.129335 1566127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4344817edd8bd0039bbc7d4d6af60e654808fcdca6a599af4a5badecee199b0 
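
Editor's note: the join command printed above embeds a bootstrap token and the SHA-256 hash of the cluster CA public key. Tokens expire (default TTL 24h); both values can be regenerated on the control plane with standard, documented commands:

	# print a fresh join command with a new bootstrap token
	kubeadm token create --print-join-command
	# recompute the --discovery-token-ca-cert-hash shown above
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
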
	I0805 22:50:08.129358 1566127 cni.go:84] Creating CNI manager for ""
	I0805 22:50:08.129367 1566127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:50:08.131320 1566127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 22:50:08.132999 1566127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 22:50:08.137144 1566127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 22:50:08.137165 1566127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 22:50:08.156277 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
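
Editor's note: with the docker driver and the crio runtime, minikube recommends and applies the kindnet CNI manifest (cni.yaml above). A quick way to confirm the CNI pods came up; the app=kindnet label is an assumption based on the upstream kindnet manifest:

	kubectl --context addons-554168 -n kube-system get pods -l app=kindnet -o wide
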
	I0805 22:50:08.416052 1566127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 22:50:08.416150 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:08.416186 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-554168 minikube.k8s.io/updated_at=2024_08_05T22_50_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=addons-554168 minikube.k8s.io/primary=true
	I0805 22:50:08.594346 1566127 ops.go:34] apiserver oom_adj: -16
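
Editor's note: the oom_adj check above confirms the kubelet deprioritized kube-apiserver for the kernel OOM killer. Critical static pods typically receive oom_score_adj -997, which the legacy /proc/<pid>/oom_adj interface reports as -16 after the kernel's rescaling (that mapping is an assumption here, not something the log states):

	# read the modern interface directly; critical static pods typically show -997
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj
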
	I0805 22:50:08.594442 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:09.095302 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:09.594574 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:10.095304 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:10.594599 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:11.095096 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:11.594606 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:12.094695 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:12.595392 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:13.094615 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:13.595019 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:14.095010 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:14.594552 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:15.095338 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:15.595143 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:16.095041 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:16.595512 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:17.095365 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:17.595254 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:18.095323 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:18.594553 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:19.094609 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:19.595457 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:20.095640 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:20.595556 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:21.094963 1566127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:21.190584 1566127 kubeadm.go:1113] duration metric: took 12.774505312s to wait for elevateKubeSystemPrivileges
	I0805 22:50:21.190614 1566127 kubeadm.go:394] duration metric: took 31.813292438s to StartCluster
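
Editor's note: the burst of "kubectl get sa default" runs above is a readiness poll: minikube retries roughly every 500ms until the "default" ServiceAccount exists, which signals the service-account controller is up and the minikube-rbac binding can take effect. A minimal sketch of the same wait as a shell loop, using the paths from the log:

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
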
	I0805 22:50:21.190632 1566127 settings.go:142] acquiring lock: {Name:mk3a1710a3f4cbefc7bc92fbb01d7e9e884b2ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:21.190758 1566127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:50:21.191144 1566127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/kubeconfig: {Name:mk27f7706a4f201bd85010407a0f2ea984ce81b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:21.191338 1566127 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:50:21.191496 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 22:50:21.191776 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:21.191784 1566127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
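
Editor's note: the toEnable map above records the desired on/off state of every known addon for this profile; each true entry is then enabled below. The same state can be inspected and driven from the minikube CLI:

	# show addon status for this profile, then toggle one addon
	minikube -p addons-554168 addons list
	minikube -p addons-554168 addons enable metrics-server
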
	I0805 22:50:21.191926 1566127 addons.go:69] Setting yakd=true in profile "addons-554168"
	I0805 22:50:21.191962 1566127 addons.go:234] Setting addon yakd=true in "addons-554168"
	I0805 22:50:21.191958 1566127 addons.go:69] Setting inspektor-gadget=true in profile "addons-554168"
	I0805 22:50:21.191990 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.191996 1566127 addons.go:234] Setting addon inspektor-gadget=true in "addons-554168"
	I0805 22:50:21.192019 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.192085 1566127 addons.go:69] Setting metrics-server=true in profile "addons-554168"
	I0805 22:50:21.192098 1566127 addons.go:234] Setting addon metrics-server=true in "addons-554168"
	I0805 22:50:21.192114 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.192460 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.192510 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.193330 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.194925 1566127 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-554168"
	I0805 22:50:21.195193 1566127 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-554168"
	I0805 22:50:21.195235 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.195057 1566127 addons.go:69] Setting registry=true in profile "addons-554168"
	I0805 22:50:21.196213 1566127 addons.go:234] Setting addon registry=true in "addons-554168"
	I0805 22:50:21.196251 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.196794 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.197796 1566127 addons.go:69] Setting cloud-spanner=true in profile "addons-554168"
	I0805 22:50:21.197831 1566127 addons.go:234] Setting addon cloud-spanner=true in "addons-554168"
	I0805 22:50:21.197870 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.198332 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195069 1566127 addons.go:69] Setting storage-provisioner=true in profile "addons-554168"
	I0805 22:50:21.198520 1566127 addons.go:234] Setting addon storage-provisioner=true in "addons-554168"
	I0805 22:50:21.198548 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.199054 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.208437 1566127 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-554168"
	I0805 22:50:21.208541 1566127 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-554168"
	I0805 22:50:21.208750 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.209300 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195080 1566127 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-554168"
	I0805 22:50:21.215022 1566127 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-554168"
	I0805 22:50:21.215329 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195087 1566127 addons.go:69] Setting volcano=true in profile "addons-554168"
	I0805 22:50:21.228520 1566127 addons.go:234] Setting addon volcano=true in "addons-554168"
	I0805 22:50:21.228626 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.229089 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.229569 1566127 addons.go:69] Setting default-storageclass=true in profile "addons-554168"
	I0805 22:50:21.229630 1566127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-554168"
	I0805 22:50:21.229911 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.195095 1566127 addons.go:69] Setting volumesnapshots=true in profile "addons-554168"
	I0805 22:50:21.241627 1566127 addons.go:234] Setting addon volumesnapshots=true in "addons-554168"
	I0805 22:50:21.241686 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.242168 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245065 1566127 addons.go:69] Setting gcp-auth=true in profile "addons-554168"
	I0805 22:50:21.245223 1566127 mustload.go:65] Loading cluster: addons-554168
	I0805 22:50:21.245419 1566127 config.go:182] Loaded profile config "addons-554168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:21.245650 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245699 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.245226 1566127 out.go:177] * Verifying Kubernetes components...
	I0805 22:50:21.289253 1566127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:21.289661 1566127 addons.go:69] Setting ingress=true in profile "addons-554168"
	I0805 22:50:21.289689 1566127 addons.go:234] Setting addon ingress=true in "addons-554168"
	I0805 22:50:21.289729 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.290198 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.312142 1566127 addons.go:69] Setting ingress-dns=true in profile "addons-554168"
	I0805 22:50:21.312193 1566127 addons.go:234] Setting addon ingress-dns=true in "addons-554168"
	I0805 22:50:21.312254 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.312824 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.335585 1566127 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 22:50:21.339217 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 22:50:21.339295 1566127 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 22:50:21.339409 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
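
Editor's note: the Go template passed to "docker container inspect" above digs the host port mapped to the container's SSH port (22/tcp) out of NetworkSettings. "docker port" returns the same mapping in one step:

	# equivalent one-liner; prints e.g. 0.0.0.0:34637
	docker port addons-554168 22/tcp
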
	I0805 22:50:21.362182 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 22:50:21.364091 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 22:50:21.369189 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 22:50:21.371098 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 22:50:21.373182 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 22:50:21.380391 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 22:50:21.386912 1566127 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 22:50:21.388028 1566127 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 22:50:21.390973 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 22:50:21.395263 1566127 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 22:50:21.395865 1566127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:21.395880 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 22:50:21.395958 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.388833 1566127 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:21.411521 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 22:50:21.411644 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.412270 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 22:50:21.412286 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 22:50:21.412342 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.414701 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 22:50:21.414723 1566127 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 22:50:21.414798 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.388844 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 22:50:21.420773 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 22:50:21.424840 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 22:50:21.424864 1566127 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 22:50:21.424926 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	W0805 22:50:21.433727 1566127 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 22:50:21.435589 1566127 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-554168"
	I0805 22:50:21.435644 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.436133 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.444175 1566127 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 22:50:21.457695 1566127 addons.go:234] Setting addon default-storageclass=true in "addons-554168"
	I0805 22:50:21.457780 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.459498 1566127 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:21.459514 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 22:50:21.459567 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.484126 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:21.514559 1566127 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 22:50:21.515251 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 22:50:21.519235 1566127 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 22:50:21.519348 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 22:50:21.519377 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 22:50:21.519482 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.524116 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 22:50:21.524157 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 22:50:21.524251 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.536663 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:21.552618 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 22:50:21.565162 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:21.580931 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:21.595724 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 22:50:21.596202 1566127 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:21.596239 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 22:50:21.596408 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.597073 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.597603 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.598629 1566127 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:21.598645 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 22:50:21.598737 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.610233 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.614414 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.677106 1566127 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 22:50:21.681031 1566127 out.go:177]   - Using image docker.io/busybox:stable
	I0805 22:50:21.682820 1566127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:21.682840 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 22:50:21.682906 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.695629 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.696269 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.708793 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.732134 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.741913 1566127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:21.742189 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 22:50:21.746451 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.758619 1566127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:21.758640 1566127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 22:50:21.758718 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:21.776920 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.782016 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.810654 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:21.812724 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	W0805 22:50:21.814541 1566127 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0805 22:50:21.814583 1566127 retry.go:31] will retry after 291.014962ms: ssh: handshake failed: EOF
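
Editor's note: the handshake EOF above is expected this early: many addon installers dial the node's sshd in parallel while it is still starting, so the client backs off (291ms here) and retries. For reference, a manual session built from the connection details sshutil logs above would look like:

	ssh -i /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa \
	    -p 34637 docker@127.0.0.1
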
	I0805 22:50:22.039500 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 22:50:22.039531 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 22:50:22.054400 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:22.075438 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 22:50:22.075463 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 22:50:22.093227 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:22.194889 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 22:50:22.194924 1566127 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 22:50:22.195633 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:22.199911 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 22:50:22.199936 1566127 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 22:50:22.210873 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 22:50:22.210899 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 22:50:22.241798 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 22:50:22.241825 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 22:50:22.247232 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:22.262200 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 22:50:22.262231 1566127 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 22:50:22.316948 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:22.340855 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:22.341190 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 22:50:22.341210 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 22:50:22.409667 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 22:50:22.409692 1566127 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 22:50:22.444481 1566127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:22.444504 1566127 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 22:50:22.451509 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 22:50:22.451574 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 22:50:22.491903 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 22:50:22.491971 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 22:50:22.537981 1566127 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:22.538046 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 22:50:22.562533 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 22:50:22.562607 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 22:50:22.590803 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 22:50:22.590869 1566127 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 22:50:22.614727 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:22.646125 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 22:50:22.646196 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 22:50:22.693979 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:22.716703 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 22:50:22.716778 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 22:50:22.719509 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:22.730902 1566127 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 22:50:22.730976 1566127 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 22:50:22.782277 1566127 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:22.782359 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 22:50:22.872097 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 22:50:22.872168 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 22:50:22.887596 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 22:50:22.887670 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 22:50:22.983222 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 22:50:22.983299 1566127 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 22:50:23.005997 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:23.064729 1566127 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 22:50:23.064807 1566127 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 22:50:23.106266 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 22:50:23.106346 1566127 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 22:50:23.169662 1566127 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:23.169734 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 22:50:23.250454 1566127 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:23.250523 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 22:50:23.272206 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:23.275254 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 22:50:23.275323 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 22:50:23.334097 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:23.383915 1566127 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.641699225s)
	I0805 22:50:23.383991 1566127 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
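
Editor's note: the long sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a hosts plugin stanza ahead of the forward directive so host.minikube.internal resolves to the gateway, and adds a log directive after errors. Reconstructed from the sed script, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
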
	I0805 22:50:23.384599 1566127 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.642662242s)
	I0805 22:50:23.386145 1566127 node_ready.go:35] waiting up to 6m0s for node "addons-554168" to be "Ready" ...
	I0805 22:50:23.396916 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 22:50:23.396986 1566127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 22:50:23.513209 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 22:50:23.513286 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 22:50:23.644821 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 22:50:23.644845 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 22:50:23.809191 1566127 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:23.809228 1566127 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 22:50:23.956336 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:25.014979 1566127 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-554168" context rescaled to 1 replicas
	I0805 22:50:25.635846 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:26.053425 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.998987924s)
	I0805 22:50:27.258038 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.164775159s)
	I0805 22:50:27.380811 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.185147006s)
	I0805 22:50:27.380913 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.133651614s)
	I0805 22:50:27.893166 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:28.363248 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.046247915s)
	I0805 22:50:28.363292 1566127 addons.go:475] Verifying addon ingress=true in "addons-554168"
	I0805 22:50:28.363508 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.022622903s)
	I0805 22:50:28.363674 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.748866189s)
	I0805 22:50:28.363693 1566127 addons.go:475] Verifying addon metrics-server=true in "addons-554168"
	I0805 22:50:28.363734 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.669676782s)
	I0805 22:50:28.363792 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.644201033s)
	I0805 22:50:28.363823 1566127 addons.go:475] Verifying addon registry=true in "addons-554168"
	I0805 22:50:28.363953 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.357869561s)
	I0805 22:50:28.366042 1566127 out.go:177] * Verifying ingress addon...
	I0805 22:50:28.367314 1566127 out.go:177] * Verifying registry addon...
	I0805 22:50:28.367335 1566127 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-554168 service yakd-dashboard -n yakd-dashboard
	
	I0805 22:50:28.369416 1566127 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 22:50:28.371764 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 22:50:28.388898 1566127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:50:28.388986 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:28.391862 1566127 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 22:50:28.391884 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
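
Editor's note: the kapi polls above watch a label selector until the pods report Ready; "Pending: [<nil>]" means the pod object exists but has no Ready condition yet. The same check by hand, with the selectors from the log:

	kubectl --context addons-554168 -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-554168 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=registry
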
	I0805 22:50:28.594584 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.322288188s)
	W0805 22:50:28.594628 1566127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:28.594748 1566127 retry.go:31] will retry after 152.624023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:28.594867 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.260592354s)
	I0805 22:50:28.748318 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
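
Editor's note: the failure above is a classic CRD ordering race: one apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the object is rejected because the new CRD is not yet registered ("ensure CRDs are installed first"). minikube's answer is to retry the apply, as seen above. A deterministic alternative, assuming the same CRD names, is to wait for the CRDs to reach the Established condition before applying the custom resources:

	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
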
	I0805 22:50:29.018939 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.020127 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:29.107333 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 22:50:29.107516 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:29.139627 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:29.161409 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.205020281s)
	I0805 22:50:29.161445 1566127 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-554168"
	I0805 22:50:29.164477 1566127 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 22:50:29.167130 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 22:50:29.336778 1566127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:50:29.336807 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:29.360666 1566127 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 22:50:29.442911 1566127 addons.go:234] Setting addon gcp-auth=true in "addons-554168"
	I0805 22:50:29.443041 1566127 host.go:66] Checking if "addons-554168" exists ...
	I0805 22:50:29.444062 1566127 cli_runner.go:164] Run: docker container inspect addons-554168 --format={{.State.Status}}
	I0805 22:50:29.464186 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:29.474753 1566127 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 22:50:29.474811 1566127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-554168
	I0805 22:50:29.502042 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.503842 1566127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34637 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/addons-554168/id_rsa Username:docker}
	I0805 22:50:29.710705 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:29.875401 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:29.879226 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:30.174119 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:30.383063 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:30.383677 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:30.389943 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:30.671054 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:30.874688 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:30.878084 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.172252 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:31.374415 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:31.378958 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.672004 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:31.887310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:31.888318 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.172791 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:32.242582 1566127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.494215466s)
	I0805 22:50:32.242716 1566127 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767940214s)
	I0805 22:50:32.245469 1566127 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:32.247230 1566127 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 22:50:32.249116 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 22:50:32.249183 1566127 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 22:50:32.281672 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 22:50:32.281748 1566127 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 22:50:32.310842 1566127 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:32.310862 1566127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 22:50:32.331859 1566127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
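The install burst above follows a fixed pattern: scp each manifest into /etc/kubernetes/addons on the node, then run one kubectl apply pinned to the node's kubeconfig and bundled kubectl binary. A minimal local sketch of that apply step, assuming os/exec; in the real flow the same command runs over the SSH hop shown earlier:

    // applyaddon.go - the apply step from the ssh_runner.go:195 line above,
    // reproduced verbatim as an exec invocation.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
    		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
    		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		panic(err)
    	}
    }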
	I0805 22:50:32.374451 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.378452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:32.394542 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:32.692487 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:32.890801 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:32.894885 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:32.949924 1566127 addons.go:475] Verifying addon gcp-auth=true in "addons-554168"
	I0805 22:50:32.951822 1566127 out.go:177] * Verifying gcp-auth addon...
	I0805 22:50:32.954667 1566127 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 22:50:32.962837 1566127 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 22:50:32.962867 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:33.172493 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:33.374543 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:33.376598 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:33.458550 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:33.673729 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:33.873449 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:33.878204 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:33.958972 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:34.173912 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:34.374088 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:34.376350 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:34.458416 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:34.673677 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:34.875179 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:34.876095 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:34.890608 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:34.958249 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:35.172057 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:35.374278 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:35.377398 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:35.458275 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:35.678901 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:35.873891 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:35.877125 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:35.958813 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:36.172129 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:36.376504 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:36.377111 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:36.458833 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:36.676847 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:36.873590 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:36.875792 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:36.958142 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:37.171790 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:37.375450 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:37.376464 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:37.389684 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:37.457879 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:37.671718 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:37.873720 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:37.876816 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:37.957914 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:38.171413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:38.374211 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:38.375856 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:38.457687 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:38.671556 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:38.876475 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:38.880387 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:38.958002 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:39.171727 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:39.373464 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:39.376927 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:39.458407 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:39.673332 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:39.873311 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:39.875713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:39.889632 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:39.962068 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:40.172326 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:40.376474 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:40.376782 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:40.458432 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:40.671437 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:40.874274 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:40.875752 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:40.958211 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:41.172056 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:41.374340 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:41.376887 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:41.460033 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:41.671248 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:41.873895 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:41.876298 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:41.890057 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:41.958164 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:42.171891 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:42.374597 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:42.377572 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:42.458837 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:42.671697 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:42.874267 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:42.876208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:42.958489 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:43.171773 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:43.373901 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:43.377368 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:43.457901 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:43.671261 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:43.873438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:43.876885 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:43.958627 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:44.171605 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:44.374694 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:44.376527 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:44.389137 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:44.458047 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:44.679369 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:44.873715 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:44.876910 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:44.958715 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:45.172001 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:45.376091 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:45.379185 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:45.459069 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:45.671996 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:45.873324 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:45.876591 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:45.959237 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:46.172625 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:46.373440 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:46.376144 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:46.389935 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:46.458757 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:46.671935 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:46.873853 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:46.876390 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:46.958143 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:47.171749 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:47.373851 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:47.376245 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:47.458251 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:47.671610 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:47.874760 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:47.876585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:47.958706 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:48.172124 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:48.373945 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:48.375971 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:48.458292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:48.671382 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:48.873242 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:48.875864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:48.889987 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:48.958455 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:49.171522 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:49.374132 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:49.375849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:49.458442 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:49.673100 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:49.874339 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:49.876529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:49.972756 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:50.172202 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:50.376310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:50.377061 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:50.458778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:50.671955 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:50.873636 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:50.877043 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:50.958805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:51.172151 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:51.374769 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:51.376907 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:51.389623 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:51.458231 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:51.672040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:51.873434 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:51.875638 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:51.958795 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:52.172146 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:52.374758 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.375813 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:52.458395 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:52.672218 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:52.873284 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.876031 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:52.958640 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:53.171849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.373892 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.376103 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.389802 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:53.458022 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:53.671296 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.873832 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.876978 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.958067 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:54.172127 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.374031 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.376282 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.458225 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:54.671400 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.873882 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.876310 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.958153 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.171697 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.374121 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.375886 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.458821 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.671663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.874128 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.875921 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.890695 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:55.958018 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.172031 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.375128 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.377781 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.458780 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.672114 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.873803 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.875742 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.958678 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.171773 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.373322 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.375648 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.459024 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.671641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.873146 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.875830 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.958640 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.173949 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.373979 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.376526 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.389472 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:50:58.461909 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.671944 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.873913 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.876225 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.957943 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.171345 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.373729 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.376883 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.458873 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.670832 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.873638 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.876749 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.958707 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.204058 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.383864 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.384114 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.392279 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:00.458532 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.671040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.873426 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.876657 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.958965 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.171428 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.374410 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.375486 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.458281 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.671812 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.874623 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.876454 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.958138 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.171805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.374541 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.376380 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.459194 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.671999 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.874966 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.875330 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.889457 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:02.958801 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.172024 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.375545 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.376285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.459005 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.671922 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.874663 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.876529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.958490 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.171543 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.374028 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.376471 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.458854 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.671763 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.873960 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.876662 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.889724 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:04.958448 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.171258 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.374142 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.375852 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.459101 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.671181 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.873219 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.875553 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.959805 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.171292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.376065 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.376873 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.458778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.672289 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.874660 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.876191 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.890130 1566127 node_ready.go:53] node "addons-554168" has status "Ready":"False"
	I0805 22:51:06.957935 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.171736 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.374614 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.376399 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.458734 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.672063 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.873703 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.876167 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.958390 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.192479 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.378249 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.383614 1566127 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:51:08.383641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.394945 1566127 node_ready.go:49] node "addons-554168" has status "Ready":"True"
	I0805 22:51:08.394971 1566127 node_ready.go:38] duration metric: took 45.008683249s for node "addons-554168" to be "Ready" ...
	I0805 22:51:08.394981 1566127 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 22:51:08.409843 1566127 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace to be "Ready" ...
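Once the node flips to Ready, pod_ready.go switches from node polling to per-pod checks: each system-critical pod must report a PodReady condition of True, and the log records how long that took. A minimal sketch of that check, assuming client-go; isPodReady and waitPodReady are hypothetical names:

    // podready.go - a sketch of the per-pod Ready check behind the
    // pod_ready.go:78/92 lines that follow in this log.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True, which is
    // what the log's has status "Ready":"True" refers to.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == v1.PodReady {
    			return c.Status == v1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    // waitPodReady polls isPodReady and prints the duration metric the log records.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	start := time.Now()
    	for {
    		ok, err := isPodReady(ctx, cs, ns, name)
    		if err == nil && ok {
    			fmt.Printf("duration metric: took %s for pod %q to be Ready\n", time.Since(start), name)
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitPodReady(ctx, cs, "kube-system", "coredns-7db6d8ff4d-prz4h"); err != nil {
    		panic(err)
    	}
    }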
	I0805 22:51:08.513744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.672989 1566127 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:51:08.673016 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.876777 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.880867 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.966261 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.173090 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.379795 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.388279 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.458512 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.672285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.874385 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.876929 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.964413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.201488 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.398423 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.399515 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.418870 1566127 pod_ready.go:102] pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:10.458285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.674418 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.874847 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.877424 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.920588 1566127 pod_ready.go:92] pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.920668 1566127 pod_ready.go:81] duration metric: took 2.510791132s for pod "coredns-7db6d8ff4d-prz4h" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.920708 1566127 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.932533 1566127 pod_ready.go:92] pod "etcd-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.932627 1566127 pod_ready.go:81] duration metric: took 11.879425ms for pod "etcd-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.932658 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.939769 1566127 pod_ready.go:92] pod "kube-apiserver-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.939839 1566127 pod_ready.go:81] duration metric: took 7.159137ms for pod "kube-apiserver-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.939867 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.946886 1566127 pod_ready.go:92] pod "kube-controller-manager-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.946958 1566127 pod_ready.go:81] duration metric: took 7.067832ms for pod "kube-controller-manager-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.946986 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp29n" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.953739 1566127 pod_ready.go:92] pod "kube-proxy-lp29n" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:10.953812 1566127 pod_ready.go:81] duration metric: took 6.805501ms for pod "kube-proxy-lp29n" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.953840 1566127 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:10.959446 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.173121 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.317590 1566127 pod_ready.go:92] pod "kube-scheduler-addons-554168" in "kube-system" namespace has status "Ready":"True"
	I0805 22:51:11.317614 1566127 pod_ready.go:81] duration metric: took 363.753096ms for pod "kube-scheduler-addons-554168" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:11.317627 1566127 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace to be "Ready" ...
	I0805 22:51:11.374112 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.377178 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.458136 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.672984 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.874345 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.877076 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.960523 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.172468 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.374432 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.377395 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.458292 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.675090 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.886231 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.887700 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.959054 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.173989 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.337661 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:13.376673 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.380654 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.458926 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.673942 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.877743 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.883346 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.958714 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.173498 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.378059 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.385019 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.460285 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.674713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.878902 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.880902 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.958778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.176120 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.377650 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.380881 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.459190 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.674108 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.833823 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:15.882639 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.883582 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.959956 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.174369 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.374634 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.382465 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.459364 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.672963 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.875326 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.878265 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.959334 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.173596 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.374355 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.377290 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.458048 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.672544 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.873865 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.877502 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.958846 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.174479 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.325339 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:18.384124 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.385413 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.459362 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.675180 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.873635 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.877174 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.959432 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.174065 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.373948 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.383505 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.458878 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.674604 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.885594 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.890601 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.959405 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.174807 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.379044 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.380277 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.461514 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.674040 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.826232 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:20.875349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.880774 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.959365 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.174552 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.378438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.382967 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.459977 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.674151 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.876232 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.880690 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.959917 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.174818 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.382349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.391560 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.460042 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.673889 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.874134 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.878110 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.966277 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.176425 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.326644 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:23.377275 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.379621 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.458493 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.673846 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.874006 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.877687 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.962000 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.172862 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.375105 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.381855 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.460083 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.674097 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.874840 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.879471 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.959520 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.173704 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.374116 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.378005 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.458585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.672713 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.855205 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:25.877259 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.880241 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.962456 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.172849 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.375799 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.377384 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.459394 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.672747 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.874314 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.877388 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.962825 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.173585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.377264 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.382322 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.460138 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.673671 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.875035 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.883321 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.959207 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.175557 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.325061 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:28.376264 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.379956 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.459150 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.690306 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.875167 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.876925 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.958386 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.173515 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.376094 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.378486 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.460621 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.675558 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.875716 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.878902 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.958784 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.174827 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.376541 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.384337 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.459290 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.673941 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.827469 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:30.877849 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.887836 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.959411 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.173452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.375156 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.378970 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:31.458947 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.706663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.898397 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:31.900230 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.964948 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.175160 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.380444 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.383869 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:32.458848 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.673744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.874251 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.877208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:32.958960 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.176651 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.324185 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:33.375306 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.378038 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:33.458682 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.674077 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.874972 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.878943 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:33.958704 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.172964 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.373751 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.377427 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:34.458154 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.679341 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.876109 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.889740 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:34.965056 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.173647 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.328436 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:35.383676 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:35.385279 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.460967 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.673554 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.874370 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.878286 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:35.958564 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.173214 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.375602 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.378208 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:36.458950 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.673490 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.875924 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.878631 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:36.958070 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.177913 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.373884 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.377044 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:37.458663 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.673576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.825334 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:37.879213 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.885886 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:37.963936 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.174790 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.376637 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.380183 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:38.459436 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.675719 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.873783 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.877871 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:38.960175 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.175778 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.378270 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:39.380082 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.459433 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.679750 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.875234 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.881466 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:39.959025 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.174404 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.330090 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:40.382393 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:40.383976 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.464959 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.675276 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.877780 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.880692 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:40.958584 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.177541 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.374692 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.378048 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:41.459822 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.678099 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.874486 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.876363 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:41.958327 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.173452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.376076 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.383470 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:42.459301 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.673494 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.824374 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:42.874317 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.878270 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:42.959990 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.172659 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.378247 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.378764 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:43.459041 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.673255 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.882027 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:43.885425 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.959035 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.173389 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.375228 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.377800 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:44.458727 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.673416 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.874716 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.877248 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:44.958059 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.183585 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.328040 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:45.384622 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.398075 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:45.460242 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.690926 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.874088 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.877189 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:45.958272 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.173295 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.375118 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.382559 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:46.462162 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.674513 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.880542 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.883396 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:46.960470 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.173269 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.404137 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.413718 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:47.458856 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.681256 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.824703 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:47.874732 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.882562 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:47.958678 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.176243 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.376341 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.388641 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:48.458469 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.675016 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.878084 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.881966 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:48.958605 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.174716 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.374976 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.377631 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:49.458315 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.673293 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.824963 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:49.874312 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.877289 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:49.958736 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.188700 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.374349 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.377319 1566127 kapi.go:107] duration metric: took 1m22.005554259s to wait for kubernetes.io/minikube-addons=registry ...
	I0805 22:51:50.458824 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.673563 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.874353 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.958079 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.172996 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.375940 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.459497 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.673960 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.874290 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.958908 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.174840 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.326270 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:52.375660 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.458600 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.674452 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.875201 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.958908 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.173941 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.375781 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.459576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.673906 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.874755 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.958581 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.174123 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.326793 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:54.375367 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.459123 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.677520 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.875472 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.960391 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.174032 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.375119 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.469304 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.673565 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.874350 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.958654 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.173305 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.374777 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.458888 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.682744 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.825703 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:56.876304 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.958841 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.176537 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.375536 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.458861 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.673312 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.879517 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.959555 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.174408 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.375129 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.461877 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.673770 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.874329 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.958983 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.173863 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.324759 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:51:59.374700 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.458588 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.672673 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.874774 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.958581 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.189576 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.414910 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.459354 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.675300 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.875855 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.958871 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.175582 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.325171 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:01.376813 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.459508 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.674334 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.884488 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.958864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.174081 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.373937 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.458299 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.675092 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.874621 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.959146 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.173556 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.326139 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:03.374626 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.458383 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.672623 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.873696 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.958326 1566127 kapi.go:107] duration metric: took 1m31.003657779s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 22:52:03.960031 1566127 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-554168 cluster.
	I0805 22:52:03.961788 1566127 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 22:52:03.963383 1566127 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
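The three messages above describe how the gcp-auth addon behaves once its pod is ready: credentials are injected into every new pod unless the pod carries the gcp-auth-skip-secret label key. As a minimal sketch of an opted-out pod, built with the Kubernetes Go client types the rest of this log already uses, the pod name and the label value "true" are illustrative assumptions; per the message above only the label key matters.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod name and the "true" label value are illustrative assumptions;
	// the gcp-auth message above only requires the label *key* to be set.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	manifest, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(manifest)) // emits a manifest suitable for kubectl apply
}

Recreating an existing pod with this label (or rerunning addons enable with --refresh, as the message suggests) is what changes whether credentials are mounted.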
	I0805 22:52:04.173072 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.373945 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.672864 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.874229 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.175458 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.374355 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.673281 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.829304 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:05.874605 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.172529 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.374455 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.673491 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.874397 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.173161 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.375308 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.672985 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.874036 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.172521 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.325677 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:08.374566 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.672666 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.876650 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.174359 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.375628 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.673925 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.876082 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.172720 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.374793 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.673305 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.824835 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:10.874348 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.172707 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:11.374709 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.674459 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:11.875179 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.173594 1566127 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:12.386639 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.673874 1566127 kapi.go:107] duration metric: took 1m43.506743484s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0805 22:52:12.874448 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.326090 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:13.374514 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.874478 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.374043 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.877395 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.338072 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:15.376290 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.874773 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.374438 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.874350 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.374934 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.827975 1566127 pod_ready.go:102] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"False"
	I0805 22:52:17.875237 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.374313 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.832928 1566127 pod_ready.go:92] pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace has status "Ready":"True"
	I0805 22:52:18.832959 1566127 pod_ready.go:81] duration metric: took 1m7.51532442s for pod "metrics-server-c59844bb4-4dgqd" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.832972 1566127 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.842526 1566127 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace has status "Ready":"True"
	I0805 22:52:18.842554 1566127 pod_ready.go:81] duration metric: took 9.572401ms for pod "nvidia-device-plugin-daemonset-vngm6" in "kube-system" namespace to be "Ready" ...
	I0805 22:52:18.842577 1566127 pod_ready.go:38] duration metric: took 1m10.447580242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
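	[Editorial sketch, not minikube's implementation] The kapi.go and pod_ready.go lines above record a label-selector readiness poll on a ~500ms cadence. A minimal self-contained Go version of that loop, assuming client-go and the kubeconfig path that appears later in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	// waitForPods polls pods matching selector in ns until all are Ready
	// or timeout elapses, mirroring the wait loop logged above.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps
		}
		return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from this log
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(context.Background(), cs, "kube-system", "k8s-app=kube-dns", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pods ready")
	}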
	I0805 22:52:18.842593 1566127 api_server.go:52] waiting for apiserver process to appear ...
	I0805 22:52:18.843273 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:18.843345 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:18.893192 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.954292 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:18.954317 1566127 cri.go:89] found id: ""
	I0805 22:52:18.954327 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:18.954948 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:18.976360 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:18.976435 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:19.112787 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:19.112810 1566127 cri.go:89] found id: ""
	I0805 22:52:19.112819 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:19.112897 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.125060 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:19.125147 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:19.202655 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:19.202681 1566127 cri.go:89] found id: ""
	I0805 22:52:19.202689 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:19.202746 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.206922 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:19.206997 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:19.286716 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:19.286740 1566127 cri.go:89] found id: ""
	I0805 22:52:19.286749 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:19.286814 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.292053 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:19.292138 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:19.355216 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:19.355241 1566127 cri.go:89] found id: ""
	I0805 22:52:19.355249 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:19.355316 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.361963 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:19.362039 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:19.375748 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.426252 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:19.426280 1566127 cri.go:89] found id: ""
	I0805 22:52:19.426289 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:19.426358 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:19.434080 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:19.434170 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:19.496471 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:19.496493 1566127 cri.go:89] found id: ""
	I0805 22:52:19.496508 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:19.496619 1566127 ssh_runner.go:195] Run: which crictl
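	[Editorial sketch] The cri.go lines above resolve one container ID per control-plane component before gathering its logs. The crictl command below is copied verbatim from the "Run:" lines; the Go wrapper around it is illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (running or exited)
	// whose name matches the given filter, e.g. "kube-apiserver" or "etcd".
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
		}
		return strings.Fields(string(out)), nil // --quiet prints one 64-hex ID per line
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := containerIDs(c)
			fmt.Println(c, ids, err)
		}
	}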
	I0805 22:52:19.505747 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:19.505773 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:19.770483 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:19.770514 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:19.853681 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:19.853722 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:19.874955 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.949182 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:19.949263 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:20.084140 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:20.084217 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0805 22:52:20.145430 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.145647 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.150814 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151055 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151243 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151446 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151632 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.151841 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152006 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152191 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152381 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152663 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.152867 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.153076 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:20.197558 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:20.197602 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:20.273120 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:20.276083 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:20.332603 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:20.332690 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:20.378076 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.417592 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:20.417625 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:20.484498 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:20.484530 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:20.554925 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:20.554958 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:20.677357 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:20.677404 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:20.710010 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:20.710038 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:20.710087 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:20.710100 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710110 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710119 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710126 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:20.710132 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:20.710138 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:20.710144 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
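	[Editorial sketch] The logs.go:138 "Found kubelet problem" lines above come from scanning the kubelet journal for warning/error records. The journalctl invocation below is verbatim from the log; the matching heuristic (klog W/E records mentioning a failure) is an assumption for illustration, not minikube's exact rule:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"regexp"
		"strings"
	)

	// problem matches klog payloads like "E0805 22:51:08.107419 ... Failed to watch ..."
	var problem = regexp.MustCompile(`^[WE]\d{4} .*[Ff]ailed`)

	func main() {
		out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl:", err)
			return
		}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := sc.Text()
			// journald prefixes each record with "MMM DD HH:MM:SS host unit[pid]: ";
			// the klog payload follows the first "]: ".
			if i := strings.Index(line, "]: "); i >= 0 && problem.MatchString(line[i+3:]) {
				fmt.Println("Found kubelet problem:", line)
			}
		}
	}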
	I0805 22:52:20.876013 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.374257 1566127 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.874250 1566127 kapi.go:107] duration metric: took 1m53.504836551s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 22:52:21.876178 1566127 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0805 22:52:21.877785 1566127 addons.go:510] duration metric: took 2m0.685991291s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0805 22:52:30.711717 1566127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 22:52:30.726224 1566127 api_server.go:72] duration metric: took 2m9.534856939s to wait for apiserver process to appear ...
	I0805 22:52:30.726250 1566127 api_server.go:88] waiting for apiserver healthz status ...
	I0805 22:52:30.726283 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:30.726337 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:30.764705 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:30.764726 1566127 cri.go:89] found id: ""
	I0805 22:52:30.764734 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:30.764791 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.768211 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:30.768278 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:30.808094 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:30.808114 1566127 cri.go:89] found id: ""
	I0805 22:52:30.808122 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:30.808178 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.811721 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:30.811796 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:30.850590 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:30.850614 1566127 cri.go:89] found id: ""
	I0805 22:52:30.850622 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:30.850679 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.854292 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:30.854365 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:30.893275 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:30.893299 1566127 cri.go:89] found id: ""
	I0805 22:52:30.893307 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:30.893368 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.896962 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:30.897035 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:30.937066 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:30.937089 1566127 cri.go:89] found id: ""
	I0805 22:52:30.937097 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:30.937160 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.940629 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:30.940748 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:30.979170 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:30.979193 1566127 cri.go:89] found id: ""
	I0805 22:52:30.979209 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:30.979271 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:30.982580 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:30.982644 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:31.023003 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:31.023026 1566127 cri.go:89] found id: ""
	I0805 22:52:31.023033 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:31.023094 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:31.026613 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:31.026647 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:31.078975 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:31.079015 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:31.126524 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:31.126562 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:31.172237 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:31.172267 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:31.222815 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:31.222847 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:31.295478 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:31.295517 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:31.343130 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:31.343166 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:31.362607 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:31.362637 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:31.517835 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:31.517868 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:31.580676 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:31.580713 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:31.676789 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:31.676868 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0805 22:52:31.724835 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.725112 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727467 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727659 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.727844 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728047 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728238 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728448 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728621 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728807 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.728993 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729202 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729387 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.729592 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:31.763081 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:31.763110 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:31.800452 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:31.800478 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:31.800526 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:31.800542 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800577 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800594 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800602 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:31.800611 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:31.800623 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:31.800629 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:52:41.801964 1566127 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0805 22:52:41.810635 1566127 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0805 22:52:41.812208 1566127 api_server.go:141] control plane version: v1.30.3
	I0805 22:52:41.812240 1566127 api_server.go:131] duration metric: took 11.085981662s to wait for apiserver health ...
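	[Editorial sketch] The api_server.go:253 check above is a plain HTTPS GET against /healthz that expects a 200 and the body "ok". The URL is taken from the log; skipping TLS verification and relying on anonymous access to /healthz are assumptions made to keep the example self-contained (minikube itself authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip cert checks for brevity
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz") // address from the log above
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body) // expect 200 and "ok"
	}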
	I0805 22:52:41.812249 1566127 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 22:52:41.812271 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0805 22:52:41.812334 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0805 22:52:41.854122 1566127 cri.go:89] found id: "9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:41.854142 1566127 cri.go:89] found id: ""
	I0805 22:52:41.854150 1566127 logs.go:276] 1 containers: [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19]
	I0805 22:52:41.854210 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.857636 1566127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0805 22:52:41.857707 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0805 22:52:41.895789 1566127 cri.go:89] found id: "8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:41.895865 1566127 cri.go:89] found id: ""
	I0805 22:52:41.895885 1566127 logs.go:276] 1 containers: [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778]
	I0805 22:52:41.895974 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.899389 1566127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0805 22:52:41.899457 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0805 22:52:41.942516 1566127 cri.go:89] found id: "09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:41.942588 1566127 cri.go:89] found id: ""
	I0805 22:52:41.942603 1566127 logs.go:276] 1 containers: [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080]
	I0805 22:52:41.942664 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.946535 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0805 22:52:41.946620 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0805 22:52:41.987110 1566127 cri.go:89] found id: "0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:41.987133 1566127 cri.go:89] found id: ""
	I0805 22:52:41.987142 1566127 logs.go:276] 1 containers: [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde]
	I0805 22:52:41.987204 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:41.990884 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0805 22:52:41.990958 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0805 22:52:42.055800 1566127 cri.go:89] found id: "42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:42.055823 1566127 cri.go:89] found id: ""
	I0805 22:52:42.055831 1566127 logs.go:276] 1 containers: [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb]
	I0805 22:52:42.055889 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.059860 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0805 22:52:42.059940 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0805 22:52:42.104956 1566127 cri.go:89] found id: "63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:42.104984 1566127 cri.go:89] found id: ""
	I0805 22:52:42.104991 1566127 logs.go:276] 1 containers: [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef]
	I0805 22:52:42.105059 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.109486 1566127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0805 22:52:42.109584 1566127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0805 22:52:42.159575 1566127 cri.go:89] found id: "b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:42.159600 1566127 cri.go:89] found id: ""
	I0805 22:52:42.159609 1566127 logs.go:276] 1 containers: [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86]
	I0805 22:52:42.159677 1566127 ssh_runner.go:195] Run: which crictl
	I0805 22:52:42.164177 1566127 logs.go:123] Gathering logs for dmesg ...
	I0805 22:52:42.164214 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 22:52:42.185524 1566127 logs.go:123] Gathering logs for describe nodes ...
	I0805 22:52:42.185717 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 22:52:42.343018 1566127 logs.go:123] Gathering logs for kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] ...
	I0805 22:52:42.343051 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19"
	I0805 22:52:42.398119 1566127 logs.go:123] Gathering logs for kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] ...
	I0805 22:52:42.398153 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86"
	I0805 22:52:42.448477 1566127 logs.go:123] Gathering logs for CRI-O ...
	I0805 22:52:42.448511 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0805 22:52:42.541758 1566127 logs.go:123] Gathering logs for kubelet ...
	I0805 22:52:42.541797 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0805 22:52:42.593894 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.107369    1564 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.594635 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.107419    1564 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597126 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140125    1564 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597324 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140169    1564 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597511 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140874    1564 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597719 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.140922    1564 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.597937 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.140989    1564 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598159 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141013    1564 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598349 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.141116    1564 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598536 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598724 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.598967 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.599159 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.599366 1566127 logs.go:138] Found kubelet problem: Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:42.634056 1566127 logs.go:123] Gathering logs for etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] ...
	I0805 22:52:42.634088 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778"
	I0805 22:52:42.681807 1566127 logs.go:123] Gathering logs for coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] ...
	I0805 22:52:42.681841 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080"
	I0805 22:52:42.734224 1566127 logs.go:123] Gathering logs for kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] ...
	I0805 22:52:42.734255 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde"
	I0805 22:52:42.781028 1566127 logs.go:123] Gathering logs for kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] ...
	I0805 22:52:42.781065 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb"
	I0805 22:52:42.819987 1566127 logs.go:123] Gathering logs for kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] ...
	I0805 22:52:42.820017 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef"
	I0805 22:52:42.912058 1566127 logs.go:123] Gathering logs for container status ...
	I0805 22:52:42.912089 1566127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 22:52:42.962042 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:42.962071 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0805 22:52:42.962144 1566127 out.go:239] X Problems detected in kubelet:
	W0805 22:52:42.962160 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.141129    1564 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962167 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.145708    1564 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962176 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.145761    1564 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-554168" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962189 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: W0805 22:51:08.146180    1564 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	W0805 22:52:42.962362 1566127 out.go:239]   Aug 05 22:51:08 addons-554168 kubelet[1564]: E0805 22:51:08.146210    1564 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-554168" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-554168' and this object
	I0805 22:52:42.962371 1566127 out.go:304] Setting ErrFile to fd 2...
	I0805 22:52:42.962382 1566127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:52:52.976729 1566127 system_pods.go:59] 18 kube-system pods found
	I0805 22:52:52.976775 1566127 system_pods.go:61] "coredns-7db6d8ff4d-prz4h" [278434ff-e033-485a-b4bc-320db42e8d40] Running
	I0805 22:52:52.976782 1566127 system_pods.go:61] "csi-hostpath-attacher-0" [08e40914-ba9f-4ff2-88ef-d16dc5d650ef] Running
	I0805 22:52:52.976787 1566127 system_pods.go:61] "csi-hostpath-resizer-0" [1c4036fd-0450-4070-bea9-d46b5d5a51a6] Running
	I0805 22:52:52.976792 1566127 system_pods.go:61] "csi-hostpathplugin-pz5t5" [3d8afa71-9759-47b2-840d-51f8c0a66d69] Running
	I0805 22:52:52.976799 1566127 system_pods.go:61] "etcd-addons-554168" [aa854717-a161-49ba-b27b-91967097bffe] Running
	I0805 22:52:52.976805 1566127 system_pods.go:61] "kindnet-jtck6" [6a21b8aa-054e-4f1d-88df-3b7ace40541b] Running
	I0805 22:52:52.976810 1566127 system_pods.go:61] "kube-apiserver-addons-554168" [a31835e7-19cf-4813-918e-3c3cb3013d45] Running
	I0805 22:52:52.976820 1566127 system_pods.go:61] "kube-controller-manager-addons-554168" [314b12cf-3dfc-45fe-9d28-aa0a7fdb65d5] Running
	I0805 22:52:52.976825 1566127 system_pods.go:61] "kube-ingress-dns-minikube" [fa78f0fa-4656-494a-8b6f-92f40e4c8f8b] Running
	I0805 22:52:52.976833 1566127 system_pods.go:61] "kube-proxy-lp29n" [327a3427-7590-4179-951e-c53d7d42f072] Running
	I0805 22:52:52.976838 1566127 system_pods.go:61] "kube-scheduler-addons-554168" [c1e4f7f4-71e0-4719-a35b-6224c4f46acc] Running
	I0805 22:52:52.976845 1566127 system_pods.go:61] "metrics-server-c59844bb4-4dgqd" [87a4cfae-8eae-4755-8efe-9e869f5ea69e] Running
	I0805 22:52:52.976850 1566127 system_pods.go:61] "nvidia-device-plugin-daemonset-vngm6" [bc68d922-6356-4b7c-a0af-9f0e70a94548] Running
	I0805 22:52:52.976854 1566127 system_pods.go:61] "registry-698f998955-x6xxq" [4ae86949-feca-437d-8b71-1b2bea971616] Running
	I0805 22:52:52.976858 1566127 system_pods.go:61] "registry-proxy-5pp4p" [03aac67e-c40e-4703-995f-88bab30fa562] Running
	I0805 22:52:52.976872 1566127 system_pods.go:61] "snapshot-controller-745499f584-lbm9t" [4041ad86-2418-4108-a5e7-e00452c8eb62] Running
	I0805 22:52:52.976882 1566127 system_pods.go:61] "snapshot-controller-745499f584-lbzrh" [54002dfa-8dcf-42f4-b80b-24154275fc76] Running
	I0805 22:52:52.976886 1566127 system_pods.go:61] "storage-provisioner" [97eb7b7b-4406-412d-94ec-49b93cfc1495] Running
	I0805 22:52:52.976893 1566127 system_pods.go:74] duration metric: took 11.164637418s to wait for pod list to return data ...
	I0805 22:52:52.976901 1566127 default_sa.go:34] waiting for default service account to be created ...
	I0805 22:52:52.979340 1566127 default_sa.go:45] found service account: "default"
	I0805 22:52:52.979365 1566127 default_sa.go:55] duration metric: took 2.457125ms for default service account to be created ...
	I0805 22:52:52.979375 1566127 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 22:52:52.988986 1566127 system_pods.go:86] 18 kube-system pods found
	I0805 22:52:52.989020 1566127 system_pods.go:89] "coredns-7db6d8ff4d-prz4h" [278434ff-e033-485a-b4bc-320db42e8d40] Running
	I0805 22:52:52.989027 1566127 system_pods.go:89] "csi-hostpath-attacher-0" [08e40914-ba9f-4ff2-88ef-d16dc5d650ef] Running
	I0805 22:52:52.989032 1566127 system_pods.go:89] "csi-hostpath-resizer-0" [1c4036fd-0450-4070-bea9-d46b5d5a51a6] Running
	I0805 22:52:52.989037 1566127 system_pods.go:89] "csi-hostpathplugin-pz5t5" [3d8afa71-9759-47b2-840d-51f8c0a66d69] Running
	I0805 22:52:52.989042 1566127 system_pods.go:89] "etcd-addons-554168" [aa854717-a161-49ba-b27b-91967097bffe] Running
	I0805 22:52:52.989047 1566127 system_pods.go:89] "kindnet-jtck6" [6a21b8aa-054e-4f1d-88df-3b7ace40541b] Running
	I0805 22:52:52.989051 1566127 system_pods.go:89] "kube-apiserver-addons-554168" [a31835e7-19cf-4813-918e-3c3cb3013d45] Running
	I0805 22:52:52.989055 1566127 system_pods.go:89] "kube-controller-manager-addons-554168" [314b12cf-3dfc-45fe-9d28-aa0a7fdb65d5] Running
	I0805 22:52:52.989059 1566127 system_pods.go:89] "kube-ingress-dns-minikube" [fa78f0fa-4656-494a-8b6f-92f40e4c8f8b] Running
	I0805 22:52:52.989063 1566127 system_pods.go:89] "kube-proxy-lp29n" [327a3427-7590-4179-951e-c53d7d42f072] Running
	I0805 22:52:52.989068 1566127 system_pods.go:89] "kube-scheduler-addons-554168" [c1e4f7f4-71e0-4719-a35b-6224c4f46acc] Running
	I0805 22:52:52.989074 1566127 system_pods.go:89] "metrics-server-c59844bb4-4dgqd" [87a4cfae-8eae-4755-8efe-9e869f5ea69e] Running
	I0805 22:52:52.989079 1566127 system_pods.go:89] "nvidia-device-plugin-daemonset-vngm6" [bc68d922-6356-4b7c-a0af-9f0e70a94548] Running
	I0805 22:52:52.989087 1566127 system_pods.go:89] "registry-698f998955-x6xxq" [4ae86949-feca-437d-8b71-1b2bea971616] Running
	I0805 22:52:52.989092 1566127 system_pods.go:89] "registry-proxy-5pp4p" [03aac67e-c40e-4703-995f-88bab30fa562] Running
	I0805 22:52:52.989099 1566127 system_pods.go:89] "snapshot-controller-745499f584-lbm9t" [4041ad86-2418-4108-a5e7-e00452c8eb62] Running
	I0805 22:52:52.989104 1566127 system_pods.go:89] "snapshot-controller-745499f584-lbzrh" [54002dfa-8dcf-42f4-b80b-24154275fc76] Running
	I0805 22:52:52.989114 1566127 system_pods.go:89] "storage-provisioner" [97eb7b7b-4406-412d-94ec-49b93cfc1495] Running
	I0805 22:52:52.989122 1566127 system_pods.go:126] duration metric: took 9.740959ms to wait for k8s-apps to be running ...
	I0805 22:52:52.989130 1566127 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 22:52:52.989195 1566127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 22:52:53.002197 1566127 system_svc.go:56] duration metric: took 13.034415ms WaitForService to wait for kubelet
	I0805 22:52:53.002229 1566127 kubeadm.go:582] duration metric: took 2m31.810865348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:52:53.002253 1566127 node_conditions.go:102] verifying NodePressure condition ...
	I0805 22:52:53.007160 1566127 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 22:52:53.007196 1566127 node_conditions.go:123] node cpu capacity is 2
	I0805 22:52:53.007208 1566127 node_conditions.go:105] duration metric: took 4.948767ms to run NodePressure ...
	I0805 22:52:53.007223 1566127 start.go:241] waiting for startup goroutines ...
	I0805 22:52:53.007230 1566127 start.go:246] waiting for cluster config update ...
	I0805 22:52:53.007247 1566127 start.go:255] writing updated cluster config ...
	I0805 22:52:53.007580 1566127 ssh_runner.go:195] Run: rm -f paused
	I0805 22:52:53.323183 1566127 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 22:52:53.326795 1566127 out.go:177] * Done! kubectl is now configured to use "addons-554168" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.065006518Z" level=info msg="Removed pod sandbox: 02744ce5792767fa9b2d00ecc74ff603ad2e991c71a1701f5067ac415d3a98d9" id=3ccc68f0-5c0f-4f93-8c2d-4200d5962453 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.065529648Z" level=info msg="Stopping pod sandbox: 44739f69558bd5d3b1aaa1dc57d908d78eca553457acb7a62a53191538abb0d3" id=b538b0fd-aabb-4c46-821c-fcf8f6640a2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.065575252Z" level=info msg="Stopped pod sandbox (already stopped): 44739f69558bd5d3b1aaa1dc57d908d78eca553457acb7a62a53191538abb0d3" id=b538b0fd-aabb-4c46-821c-fcf8f6640a2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.065957074Z" level=info msg="Removing pod sandbox: 44739f69558bd5d3b1aaa1dc57d908d78eca553457acb7a62a53191538abb0d3" id=6fb20e2c-92f4-4b1f-ab3c-2244260b6fd0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.076445105Z" level=info msg="Removed pod sandbox: 44739f69558bd5d3b1aaa1dc57d908d78eca553457acb7a62a53191538abb0d3" id=6fb20e2c-92f4-4b1f-ab3c-2244260b6fd0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.077137711Z" level=info msg="Stopping pod sandbox: 902568ebc6a9e6e399d8c59a49cc4791fdb5506b4933461c03aff958d98cce7d" id=8f44b86f-00e4-4fcd-8c41-1be489288e96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.077184521Z" level=info msg="Stopped pod sandbox (already stopped): 902568ebc6a9e6e399d8c59a49cc4791fdb5506b4933461c03aff958d98cce7d" id=8f44b86f-00e4-4fcd-8c41-1be489288e96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.077592894Z" level=info msg="Removing pod sandbox: 902568ebc6a9e6e399d8c59a49cc4791fdb5506b4933461c03aff958d98cce7d" id=c299a778-f534-4161-8f59-bb2cee554194 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 05 22:58:08 addons-554168 crio[967]: time="2024-08-05 22:58:08.087578768Z" level=info msg="Removed pod sandbox: 902568ebc6a9e6e399d8c59a49cc4791fdb5506b4933461c03aff958d98cce7d" id=c299a778-f534-4161-8f59-bb2cee554194 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 05 22:58:41 addons-554168 crio[967]: time="2024-08-05 22:58:41.427187480Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=64a473fb-9711-4d18-ab10-eb021ccb6df9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:58:41 addons-554168 crio[967]: time="2024-08-05 22:58:41.427424360Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=64a473fb-9711-4d18-ab10-eb021ccb6df9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:58:53 addons-554168 crio[967]: time="2024-08-05 22:58:53.426314720Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=28028efa-8c9a-4f95-aa6c-66c5d7bd4b25 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:58:53 addons-554168 crio[967]: time="2024-08-05 22:58:53.426550854Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=28028efa-8c9a-4f95-aa6c-66c5d7bd4b25 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:58:53 addons-554168 crio[967]: time="2024-08-05 22:58:53.427636653Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=cb05e1fb-bef8-4141-8e32-0995fc52d7d9 name=/runtime.v1.ImageService/PullImage
	Aug 05 22:58:53 addons-554168 crio[967]: time="2024-08-05 22:58:53.430973913Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Aug 05 22:59:38 addons-554168 crio[967]: time="2024-08-05 22:59:38.426763041Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=faed3680-0ee6-4d97-831f-c428420fc47f name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:59:38 addons-554168 crio[967]: time="2024-08-05 22:59:38.426996048Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=faed3680-0ee6-4d97-831f-c428420fc47f name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:59:53 addons-554168 crio[967]: time="2024-08-05 22:59:53.426086121Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b310e765-af9d-4580-9fe3-2ece227532f5 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 22:59:53 addons-554168 crio[967]: time="2024-08-05 22:59:53.426358192Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b310e765-af9d-4580-9fe3-2ece227532f5 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:00:02 addons-554168 crio[967]: time="2024-08-05 23:00:02.572156712Z" level=info msg="Stopping container: 441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b (timeout: 30s)" id=b9a35587-e483-4826-9dc2-ecd78e964eed name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 23:00:03 addons-554168 crio[967]: time="2024-08-05 23:00:03.750792061Z" level=info msg="Stopped container 441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b: kube-system/metrics-server-c59844bb4-4dgqd/metrics-server" id=b9a35587-e483-4826-9dc2-ecd78e964eed name=/runtime.v1.RuntimeService/StopContainer
	Aug 05 23:00:03 addons-554168 crio[967]: time="2024-08-05 23:00:03.751309267Z" level=info msg="Stopping pod sandbox: f926f727e2a7d612942c56150bf2fef607841642c9ea1aca3015278240f99ddc" id=61daab00-3418-4510-984c-dd0886848e7b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 05 23:00:03 addons-554168 crio[967]: time="2024-08-05 23:00:03.751542209Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-4dgqd Namespace:kube-system ID:f926f727e2a7d612942c56150bf2fef607841642c9ea1aca3015278240f99ddc UID:87a4cfae-8eae-4755-8efe-9e869f5ea69e NetNS:/var/run/netns/1e6a563f-3b2d-4f58-bb63-623ee95f4a32 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 05 23:00:03 addons-554168 crio[967]: time="2024-08-05 23:00:03.751689276Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-4dgqd from CNI network \"kindnet\" (type=ptp)"
	Aug 05 23:00:03 addons-554168 crio[967]: time="2024-08-05 23:00:03.799277862Z" level=info msg="Stopped pod sandbox: f926f727e2a7d612942c56150bf2fef607841642c9ea1aca3015278240f99ddc" id=61daab00-3418-4510-984c-dd0886848e7b name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80848b7d83236       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   340c9d53081ff       nginx
	0fc837ff7f7da       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   5 minutes ago       Running             headlamp                  0                   9ddbfad4c53c8       headlamp-9d868696f-xxnxt
	6f5a67fac3455       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     7 minutes ago       Running             busybox                   0                   d46f0383b9df2       busybox
	441fa6b6eb00a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   f926f727e2a7d       metrics-server-c59844bb4-4dgqd
	09cd48a169823       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   35e32eabd0e24       coredns-7db6d8ff4d-prz4h
	dba64b3f42ca1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   2be7b67b90ad9       storage-provisioner
	b2d8829265bce       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3                      9 minutes ago       Running             kindnet-cni               0                   4eca3dce0cb19       kindnet-jtck6
	42d10724c43ba       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        9 minutes ago       Running             kube-proxy                0                   4025494cb8316       kube-proxy-lp29n
	0371ec481c801       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        10 minutes ago      Running             kube-scheduler            0                   2436715e9647f       kube-scheduler-addons-554168
	8503af1c18aed       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        10 minutes ago      Running             etcd                      0                   c96411bbc36fb       etcd-addons-554168
	9046f4aa4a92a       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        10 minutes ago      Running             kube-apiserver            0                   afa62b12bb725       kube-apiserver-addons-554168
	63e045f9a9fa9       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        10 minutes ago      Running             kube-controller-manager   0                   e4615108cd183       kube-controller-manager-addons-554168
	
	
	==> coredns [09cd48a169823cec1bbf6d15166f6f5b63a8c5d4ad33035e89a866cc5c65a080] <==
	[INFO] 10.244.0.14:37326 - 44980 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002181377s
	[INFO] 10.244.0.14:35169 - 22707 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110021s
	[INFO] 10.244.0.14:35169 - 46514 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097944s
	[INFO] 10.244.0.14:49280 - 1843 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122706s
	[INFO] 10.244.0.14:49280 - 33079 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048221s
	[INFO] 10.244.0.14:54000 - 2022 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083338s
	[INFO] 10.244.0.14:54000 - 45792 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000436861s
	[INFO] 10.244.0.14:57547 - 53254 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107166s
	[INFO] 10.244.0.14:57547 - 4352 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000227723s
	[INFO] 10.244.0.14:52631 - 36119 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001578511s
	[INFO] 10.244.0.14:52631 - 31768 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001663024s
	[INFO] 10.244.0.14:51829 - 15254 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131757s
	[INFO] 10.244.0.14:51829 - 62353 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169155s
	[INFO] 10.244.0.19:48977 - 27056 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004180922s
	[INFO] 10.244.0.19:46068 - 29605 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004129722s
	[INFO] 10.244.0.19:59931 - 21042 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166734s
	[INFO] 10.244.0.19:35098 - 12053 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001066s
	[INFO] 10.244.0.19:39743 - 22944 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121664s
	[INFO] 10.244.0.19:37699 - 31205 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120762s
	[INFO] 10.244.0.19:45400 - 41622 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003696152s
	[INFO] 10.244.0.19:59312 - 10551 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003016445s
	[INFO] 10.244.0.19:58918 - 55901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.00091638s
	[INFO] 10.244.0.19:53229 - 14043 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047259s
	[INFO] 10.244.0.22:59766 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000219493s
	[INFO] 10.244.0.22:43408 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127539s
	
	
	==> describe nodes <==
	Name:               addons-554168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-554168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=addons-554168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T22_50_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-554168
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 22:50:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-554168
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 22:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 22:55:13 +0000   Mon, 05 Aug 2024 22:51:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-554168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6ac9b7fb0e1449fb7e688e34a1cf693
	  System UUID:                9c3ced50-cdba-4701-9230-5543127749e7
	  Boot ID:                    ab3fa9fd-00f6-443b-af0d-60e87e17630c
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  default                     hello-world-app-6778b5fc9f-lmjj4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  headlamp                    headlamp-9d868696f-xxnxt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 coredns-7db6d8ff4d-prz4h                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m44s
	  kube-system                 etcd-addons-554168                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m57s
	  kube-system                 kindnet-jtck6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m44s
	  kube-system                 kube-apiserver-addons-554168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-controller-manager-addons-554168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-proxy-lp29n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-addons-554168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m36s              kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-554168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-554168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node addons-554168 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m57s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m57s              kubelet          Node addons-554168 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s              kubelet          Node addons-554168 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s              kubelet          Node addons-554168 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m45s              node-controller  Node addons-554168 event: Registered Node addons-554168 in Controller
	  Normal  NodeReady                8m56s              kubelet          Node addons-554168 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000670] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000862] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.001025] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +0.003877] FS-Cache: Duplicate cookie detected
	[  +0.000695] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000909] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c495d5fa
	[  +0.000976] FS-Cache: O-key=[8] 'e8633b0000000000'
	[  +0.000655] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000881] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000c84903e3
	[  +0.000991] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +2.077764] FS-Cache: Duplicate cookie detected
	[  +0.000839] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c4c8673a
	[  +0.001004] FS-Cache: O-key=[8] 'e5633b0000000000'
	[  +0.000662] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000868] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000b02f196c
	[  +0.001016] FS-Cache: N-key=[8] 'e5633b0000000000'
	[  +0.396957] FS-Cache: Duplicate cookie detected
	[  +0.000666] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000d829204a
	[  +0.001050] FS-Cache: O-key=[8] 'ed633b0000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000884] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.000977] FS-Cache: N-key=[8] 'ed633b0000000000'
	[Aug 5 21:59] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [8503af1c18aed87e5e18da074953b29f3f803857ff65ccfa8f9de83024609778] <==
	{"level":"info","ts":"2024-08-05T22:50:00.248929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T22:50:00.248947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-05T22:50:00.248969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.248978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.248995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.249005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-05T22:50:00.252927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:50:00.252908Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-554168 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T22:50:00.254395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T22:50:00.254431Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.254845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T22:50:00.254908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T22:50:00.265474Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-05T22:50:00.266634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.26756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.267751Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T22:50:00.272597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T22:50:24.067518Z","caller":"traceutil/trace.go:171","msg":"trace[2023474310] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"144.835509ms","start":"2024-08-05T22:50:23.922665Z","end":"2024-08-05T22:50:24.067501Z","steps":["trace[2023474310] 'process raft request'  (duration: 97.665994ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.067894Z","caller":"traceutil/trace.go:171","msg":"trace[1869494953] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"145.082579ms","start":"2024-08-05T22:50:23.922801Z","end":"2024-08-05T22:50:24.067884Z","steps":["trace[1869494953] 'process raft request'  (duration: 106.277102ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.663999Z","caller":"traceutil/trace.go:171","msg":"trace[1964051575] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"110.391315ms","start":"2024-08-05T22:50:24.553586Z","end":"2024-08-05T22:50:24.663977Z","steps":["trace[1964051575] 'process raft request'  (duration: 109.943156ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.664216Z","caller":"traceutil/trace.go:171","msg":"trace[1141828130] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"110.561414ms","start":"2024-08-05T22:50:24.553647Z","end":"2024-08-05T22:50:24.664208Z","steps":["trace[1141828130] 'process raft request'  (duration: 109.97783ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:50:24.664363Z","caller":"traceutil/trace.go:171","msg":"trace[173760013] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"110.671993ms","start":"2024-08-05T22:50:24.553684Z","end":"2024-08-05T22:50:24.664356Z","steps":["trace[173760013] 'process raft request'  (duration: 109.984304ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:00:02.71372Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1989}
	{"level":"info","ts":"2024-08-05T23:00:02.762118Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1989,"took":"46.469096ms","hash":3991758625,"current-db-size-bytes":8839168,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":5672960,"current-db-size-in-use":"5.7 MB"}
	{"level":"info","ts":"2024-08-05T23:00:02.762175Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3991758625,"revision":1989,"compact-revision":-1}
	
	
	==> kernel <==
	 23:00:04 up  7:42,  0 users,  load average: 0.27, 0.64, 0.85
	Linux addons-554168 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b2d8829265bcee1071e823ffa1fe2cfbc5ecac5e16096fac9e0c2e2ed1224a86] <==
	I0805 22:58:47.578942       1 main.go:299] handling current node
	W0805 22:58:52.419166       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:58:52.419198       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 22:58:57.577381       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:58:57.577419       1 main.go:299] handling current node
	W0805 22:58:59.480506       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 22:58:59.480542       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 22:59:07.577995       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:07.578034       1 main.go:299] handling current node
	I0805 22:59:17.577300       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:17.577335       1 main.go:299] handling current node
	W0805 22:59:24.826305       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 22:59:24.826346       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0805 22:59:27.578018       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:27.578058       1 main.go:299] handling current node
	I0805 22:59:37.577830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:37.577866       1 main.go:299] handling current node
	W0805 22:59:41.609845       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:59:41.609881       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 22:59:47.577351       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:47.577385       1 main.go:299] handling current node
	W0805 22:59:47.810389       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 22:59:47.810427       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 22:59:57.577922       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 22:59:57.577958       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9046f4aa4a92af240549ceb506bd143c163a66fee8b1447341c1ad1580589a19] <==
	I0805 22:52:19.076337       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 22:53:03.091258       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54878: use of closed network connection
	I0805 22:53:50.222276       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0805 22:54:03.471417       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 22:54:23.108939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.109081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.131692       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.131970       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.155005       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.155136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.172158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.172317       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:23.229268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:23.229410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 22:54:24.156353       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 22:54:24.230448       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 22:54:24.273760       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 22:54:30.969727       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.50.72"}
	E0805 22:54:31.117384       1 watch.go:250] http2: stream closed
	I0805 22:54:47.815648       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 22:54:48.847988       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 22:54:53.379722       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 22:54:53.690182       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.88.147"}
	I0805 22:57:14.298689       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.137.130"}
	E0805 22:57:15.734932       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [63e045f9a9fa9dab892afea51685c723d2b72b811191ea883cc6c2790e22bcef] <==
	W0805 22:58:39.067289       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:39.067437       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:58:41.441322       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="66.526µs"
	W0805 22:58:44.817789       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:44.817907       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:58:53.442151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="65.272µs"
	W0805 22:59:03.404293       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:03.404693       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:05.702338       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:05.702378       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:13.179671       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:13.179713       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:16.920894       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:16.920931       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:59:38.439928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="46.925µs"
	I0805 22:59:53.437423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="45.185µs"
	W0805 22:59:57.508453       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:57.508489       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:59.755064       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:59.755104       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 23:00:01.677155       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 23:00:01.677290       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 23:00:02.025489       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 23:00:02.025534       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 23:00:02.549076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.849µs"
	
	
	==> kube-proxy [42d10724c43bac40d8d576243246016cf3059594c744fb6f1287236199ade7bb] <==
	I0805 22:50:27.509970       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:50:27.711102       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0805 22:50:28.030605       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0805 22:50:28.030675       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:50:28.056078       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0805 22:50:28.056108       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0805 22:50:28.056135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:50:28.056356       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:50:28.056379       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:50:28.058296       1 config.go:192] "Starting service config controller"
	I0805 22:50:28.058323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:50:28.058362       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:50:28.058372       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:50:28.058713       1 config.go:319] "Starting node config controller"
	I0805 22:50:28.058731       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:50:28.158863       1 shared_informer.go:320] Caches are synced for node config
	I0805 22:50:28.164183       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:50:28.164207       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0371ec481c8014d10504d60bcbb41dd38abb5ddc983b761a4f07dc04a3500fde] <==
	W0805 22:50:04.740019       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 22:50:04.740034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 22:50:04.740071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:04.740084       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 22:50:04.740117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:04.740133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:04.743872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 22:50:04.744546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 22:50:04.744721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 22:50:04.744999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 22:50:04.744830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:04.745097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:04.744964       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 22:50:04.745160       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:50:05.609666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:05.609814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:05.644540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:05.644597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:05.649343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:05.649452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 22:50:05.882909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 22:50:05.882952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 22:50:05.889969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:50:05.890014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 22:50:06.331288       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 22:57:46 addons-554168 kubelet[1564]: I0805 22:57:46.426527    1564 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 22:58:07 addons-554168 kubelet[1564]: I0805 22:58:07.993301    1564 scope.go:117] "RemoveContainer" containerID="b16de6434aef885ddf571a21831c2669ef07656e8bc1bcc0d76d8e7ae80bd283"
	Aug 05 22:58:08 addons-554168 kubelet[1564]: I0805 22:58:08.019231    1564 scope.go:117] "RemoveContainer" containerID="daaa1393fe81be19ed3a897842c74a70297d9366fca35ef4c5bd8d562b6c4494"
	Aug 05 22:58:27 addons-554168 kubelet[1564]: E0805 22:58:27.749651    1564 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Aug 05 22:58:27 addons-554168 kubelet[1564]: E0805 22:58:27.749710    1564 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Aug 05 22:58:27 addons-554168 kubelet[1564]: E0805 22:58:27.749798    1564 kuberuntime_manager.go:1256] container &Container{Name:hello-world-app,Image:docker.io/kicbase/echo-server:1.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9d8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod hello-world-app-6778b5fc9f-lmjj4_default(bf5d27cc-5d30-4c9c-9d1c-74434caa04e9): ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 22:58:27 addons-554168 kubelet[1564]: E0805 22:58:27.749837    1564 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-6778b5fc9f-lmjj4" podUID="bf5d27cc-5d30-4c9c-9d1c-74434caa04e9"
	Aug 05 22:58:41 addons-554168 kubelet[1564]: E0805 22:58:41.427789    1564 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\"\"" pod="default/hello-world-app-6778b5fc9f-lmjj4" podUID="bf5d27cc-5d30-4c9c-9d1c-74434caa04e9"
	Aug 05 22:59:09 addons-554168 kubelet[1564]: I0805 22:59:09.426219    1564 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 22:59:23 addons-554168 kubelet[1564]: E0805 22:59:23.766623    1564 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Aug 05 22:59:23 addons-554168 kubelet[1564]: E0805 22:59:23.766689    1564 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Aug 05 22:59:23 addons-554168 kubelet[1564]: E0805 22:59:23.766788    1564 kuberuntime_manager.go:1256] container &Container{Name:hello-world-app,Image:docker.io/kicbase/echo-server:1.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9d8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod hello-world-app-6778b5fc9f-lmjj4_default(bf5d27cc-5d30-4c9c-9d1c-74434caa04e9): ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 22:59:23 addons-554168 kubelet[1564]: E0805 22:59:23.766817    1564 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-6778b5fc9f-lmjj4" podUID="bf5d27cc-5d30-4c9c-9d1c-74434caa04e9"
	Aug 05 22:59:38 addons-554168 kubelet[1564]: E0805 22:59:38.427393    1564 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\"\"" pod="default/hello-world-app-6778b5fc9f-lmjj4" podUID="bf5d27cc-5d30-4c9c-9d1c-74434caa04e9"
	Aug 05 22:59:53 addons-554168 kubelet[1564]: E0805 22:59:53.427158    1564 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\"\"" pod="default/hello-world-app-6778b5fc9f-lmjj4" podUID="bf5d27cc-5d30-4c9c-9d1c-74434caa04e9"
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.820658    1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/87a4cfae-8eae-4755-8efe-9e869f5ea69e-tmp-dir\") pod \"87a4cfae-8eae-4755-8efe-9e869f5ea69e\" (UID: \"87a4cfae-8eae-4755-8efe-9e869f5ea69e\") "
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.820741    1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf6d7\" (UniqueName: \"kubernetes.io/projected/87a4cfae-8eae-4755-8efe-9e869f5ea69e-kube-api-access-mf6d7\") pod \"87a4cfae-8eae-4755-8efe-9e869f5ea69e\" (UID: \"87a4cfae-8eae-4755-8efe-9e869f5ea69e\") "
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.821161    1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87a4cfae-8eae-4755-8efe-9e869f5ea69e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "87a4cfae-8eae-4755-8efe-9e869f5ea69e" (UID: "87a4cfae-8eae-4755-8efe-9e869f5ea69e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.823837    1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a4cfae-8eae-4755-8efe-9e869f5ea69e-kube-api-access-mf6d7" (OuterVolumeSpecName: "kube-api-access-mf6d7") pod "87a4cfae-8eae-4755-8efe-9e869f5ea69e" (UID: "87a4cfae-8eae-4755-8efe-9e869f5ea69e"). InnerVolumeSpecName "kube-api-access-mf6d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.920966    1564 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/87a4cfae-8eae-4755-8efe-9e869f5ea69e-tmp-dir\") on node \"addons-554168\" DevicePath \"\""
	Aug 05 23:00:03 addons-554168 kubelet[1564]: I0805 23:00:03.921001    1564 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mf6d7\" (UniqueName: \"kubernetes.io/projected/87a4cfae-8eae-4755-8efe-9e869f5ea69e-kube-api-access-mf6d7\") on node \"addons-554168\" DevicePath \"\""
	Aug 05 23:00:04 addons-554168 kubelet[1564]: I0805 23:00:04.098295    1564 scope.go:117] "RemoveContainer" containerID="441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b"
	Aug 05 23:00:04 addons-554168 kubelet[1564]: I0805 23:00:04.134363    1564 scope.go:117] "RemoveContainer" containerID="441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b"
	Aug 05 23:00:04 addons-554168 kubelet[1564]: E0805 23:00:04.134745    1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b\": container with ID starting with 441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b not found: ID does not exist" containerID="441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b"
	Aug 05 23:00:04 addons-554168 kubelet[1564]: I0805 23:00:04.134789    1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b"} err="failed to get container status \"441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b\": rpc error: code = NotFound desc = could not find container \"441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b\": container with ID starting with 441fa6b6eb00abfe87da32e436082b42664549657c2843fa99f0a69e1790ce0b not found: ID does not exist"
	
	
	==> storage-provisioner [dba64b3f42ca10b70dc9271763bb155e7685614cb272a5f1577758aab31ea154] <==
	I0805 22:51:09.110258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:51:09.122201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:51:09.122357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:51:09.133153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:51:09.133641       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6a17fc7-bc49-4853-84f4-93a994633eae", APIVersion:"v1", ResourceVersion:"935", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-554168_2934d922-49fa-44aa-8552-61be334d079d became leader
	I0805 22:51:09.133747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-554168_2934d922-49fa-44aa-8552-61be334d079d!
	I0805 22:51:09.234363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-554168_2934d922-49fa-44aa-8552-61be334d079d!
	

                                                
                                                
-- /stdout --
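The storage-provisioner section at the end of this dump shows a healthy controller: it acquires the kube-system/k8s.io-minikube-hostpath leader lock (an Endpoints-based election) before starting its provisioner loop, so storage provisioning is not implicated in this failure. As a diagnostic aside (not part of the test flow), the lock object it references can be inspected on a live cluster with:

	kubectl --context addons-554168 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml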
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-554168 -n addons-554168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-554168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-6778b5fc9f-lmjj4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-554168 describe pod hello-world-app-6778b5fc9f-lmjj4
helpers_test.go:282: (dbg) kubectl --context addons-554168 describe pod hello-world-app-6778b5fc9f-lmjj4:

                                                
                                                
-- stdout --
	Name:             hello-world-app-6778b5fc9f-lmjj4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-554168/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 22:57:14 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=6778b5fc9f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:           10.244.0.30
	Controlled By:  ReplicaSet/hello-world-app-6778b5fc9f
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9d8f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9d8f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m51s                default-scheduler  Successfully assigned default/hello-world-app-6778b5fc9f-lmjj4 to addons-554168
	  Warning  Failed     2m20s                kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": initializing source docker://kicbase/echo-server:1.0: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    72s (x3 over 2m51s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     42s (x3 over 2m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     42s (x2 over 98s)    kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x4 over 2m20s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     12s (x4 over 2m20s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (334.31s)
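Both the kubelet log and the pod events above show that this failure is Docker Hub's anonymous pull rate limit (toomanyrequests) on docker.io/kicbase/echo-server:1.0, not metrics-server itself. A minimal sketch of one mitigation, assuming a Docker Hub account is available; the secret name regcred and the credential placeholders are illustrative, not part of this test suite:

	# Store registry credentials in the namespace the pod runs in (placeholder values)
	kubectl --context addons-554168 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> \
	  --docker-password=<dockerhub-token>
	# Attach the secret to the default service account so existing pod specs pick it up
	kubectl --context addons-554168 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'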

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (188.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cecc1171-d50f-4849-9e79-0df5a085ff0c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004368242s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-220049 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-220049 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-220049 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-220049 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cbb1dba6-11a9-4893-b4b1-a1d5af81920b] Pending
helpers_test.go:344: "sp-pod" [cbb1dba6-11a9-4893-b4b1-a1d5af81920b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0805 23:04:15.818147 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:05:37.739004 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-220049 -n functional-220049
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-08-05 23:06:47.933032635 +0000 UTC m=+1088.044801467
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-220049 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-220049 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-220049/192.168.49.2
Start Time:       Mon, 05 Aug 2024 23:03:47 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-497kt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-497kt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-220049
  Warning  Failed     2m11s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     40s (x2 over 2m11s)  kubelet            Error: ErrImagePull
  Warning  Failed     40s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    27s (x2 over 2m10s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     27s (x2 over 2m10s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    14s (x3 over 3m1s)   kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-220049 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-220049 logs sp-pod -n default: exit status 1 (104.121582ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-220049 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
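The sp-pod failure is the same Docker Hub toomanyrequests limit, here on docker.io/nginx. One workaround, sketched under the assumption that the CI host itself can still pull once, is to side-load the image into the node using the same image load flow the audit log below shows the suite already exercising:

	# Pull once on the host, then copy the image into the minikube node
	docker pull docker.io/nginx
	minikube -p functional-220049 image load --daemon docker.io/nginx
	# Re-create the pod from the same manifest used by the test
	kubectl --context functional-220049 delete pod sp-pod
	kubectl --context functional-220049 apply -f testdata/storage-provisioner/pod.yaml
	# Caveat: an untagged image defaults to imagePullPolicy Always, so the manifest
	# (testdata/storage-provisioner/pod.yaml, not shown here) would also need
	# imagePullPolicy: IfNotPresent for the side-loaded copy to be used.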
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-220049
helpers_test.go:235: (dbg) docker inspect functional-220049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652",
	        "Created": "2024-08-05T23:01:11.3235211Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1582719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-05T23:01:11.460090595Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652/hosts",
	        "LogPath": "/var/lib/docker/containers/ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652/ab224fe3b49cfb2329e71c7331930e87ac0d43517dc49d5b96e1c0b05827f652-json.log",
	        "Name": "/functional-220049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-220049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-220049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/664d29d1aef478ef8eceac77d8c369c09cfb7371fe060accd93c2334a342f039-init/diff:/var/lib/docker/overlay2/86ccb695426d1801c241efb9fd4274cb7838d591a3ef1deb45fd2daef819089e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/664d29d1aef478ef8eceac77d8c369c09cfb7371fe060accd93c2334a342f039/merged",
	                "UpperDir": "/var/lib/docker/overlay2/664d29d1aef478ef8eceac77d8c369c09cfb7371fe060accd93c2334a342f039/diff",
	                "WorkDir": "/var/lib/docker/overlay2/664d29d1aef478ef8eceac77d8c369c09cfb7371fe060accd93c2334a342f039/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-220049",
	                "Source": "/var/lib/docker/volumes/functional-220049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-220049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-220049",
	                "name.minikube.sigs.k8s.io": "functional-220049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4800b4f816e9a22939b6205aec8a5c0631059d565640aa42b3bfa2c1a14b344d",
	            "SandboxKey": "/var/run/docker/netns/4800b4f816e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34647"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34648"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34651"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34649"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34650"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-220049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e25455822b60625f2b50d0dc8565967766093130d277d67ed6f9b47eb601f341",
	                    "EndpointID": "560a5414abe686194b61cc5a5e063fc9cf1152ea044e1730521c749b54c4a467",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-220049",
	                        "ab224fe3b49c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
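The NetworkSettings block above is where the SSH endpoint used later in this trace comes from: 22/tcp is published on 127.0.0.1:34647, the address provisionDockerMachine dials. The same Go template that cli_runner executes in the log below can be run by hand (inner quotes escaped for the shell) to recover it:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'" functional-220049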
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-220049 -n functional-220049
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 logs -n 25: (1.69637207s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-220049 ssh sudo cat                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /etc/ssl/certs/15651212.pem                                          |                   |         |         |                     |                     |
	| ssh     | functional-220049 ssh sudo cat                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /usr/share/ca-certificates/15651212.pem                              |                   |         |         |                     |                     |
	| image   | functional-220049 image ls                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| ssh     | functional-220049 ssh sudo cat                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                            |                   |         |         |                     |                     |
	| image   | functional-220049 image load --daemon                                | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | docker.io/kicbase/echo-server:functional-220049                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh     | functional-220049 ssh sudo cat                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /etc/test/nested/copy/1565121/hosts                                  |                   |         |         |                     |                     |
	| image   | functional-220049 image ls                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| image   | functional-220049 image load --daemon                                | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | docker.io/kicbase/echo-server:functional-220049                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image   | functional-220049 image ls                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| image   | functional-220049 image save                                         | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | docker.io/kicbase/echo-server:functional-220049                      |                   |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image   | functional-220049 image rm                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | docker.io/kicbase/echo-server:functional-220049                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image   | functional-220049 image ls                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| image   | functional-220049 image load                                         | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image   | functional-220049 image ls                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| image   | functional-220049 image save --daemon                                | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | docker.io/kicbase/echo-server:functional-220049                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh     | functional-220049 ssh echo                                           | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | hello                                                                |                   |         |         |                     |                     |
	| ssh     | functional-220049 ssh cat                                            | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | /etc/hostname                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-220049 tunnel                                             | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-220049 tunnel                                             | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-220049 tunnel                                             | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC |                     |
	|         | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| service | functional-220049 service list                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	| service | functional-220049 service list                                       | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | -o json                                                              |                   |         |         |                     |                     |
	| service | functional-220049 service                                            | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | --namespace=default --https                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                     |                   |         |         |                     |                     |
	| service | functional-220049                                                    | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | service hello-node --url                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                     |                   |         |         |                     |                     |
	| service | functional-220049 service                                            | functional-220049 | jenkins | v1.33.1 | 05 Aug 24 23:03 UTC | 05 Aug 24 23:03 UTC |
	|         | hello-node --url                                                     |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:02:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:02:38.198804 1587477 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:02:38.198922 1587477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:02:38.198926 1587477 out.go:304] Setting ErrFile to fd 2...
	I0805 23:02:38.198929 1587477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:02:38.199165 1587477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:02:38.199509 1587477 out.go:298] Setting JSON to false
	I0805 23:02:38.200414 1587477 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27898,"bootTime":1722871060,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 23:02:38.200476 1587477 start.go:139] virtualization:  
	I0805 23:02:38.203874 1587477 out.go:177] * [functional-220049] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 23:02:38.207262 1587477 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:02:38.207331 1587477 notify.go:220] Checking for updates...
	I0805 23:02:38.212517 1587477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:02:38.215136 1587477 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 23:02:38.217652 1587477 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 23:02:38.220310 1587477 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 23:02:38.222905 1587477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:02:38.226002 1587477 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:02:38.226093 1587477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:02:38.261946 1587477 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 23:02:38.262068 1587477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:02:38.320073 1587477 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 23:02:38.31018519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:02:38.320174 1587477 docker.go:307] overlay module found
	I0805 23:02:38.324788 1587477 out.go:177] * Using the docker driver based on existing profile
	I0805 23:02:38.327484 1587477 start.go:297] selected driver: docker
	I0805 23:02:38.327509 1587477 start.go:901] validating driver "docker" against &{Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:02:38.327626 1587477 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:02:38.327727 1587477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:02:38.382127 1587477 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 23:02:38.37269264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:02:38.382557 1587477 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:02:38.382573 1587477 cni.go:84] Creating CNI manager for ""
	I0805 23:02:38.382579 1587477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 23:02:38.382632 1587477 start.go:340] cluster config:
	{Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:02:38.385867 1587477 out.go:177] * Starting "functional-220049" primary control-plane node in "functional-220049" cluster
	I0805 23:02:38.388494 1587477 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 23:02:38.391251 1587477 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0805 23:02:38.394115 1587477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:02:38.394165 1587477 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0805 23:02:38.394173 1587477 cache.go:56] Caching tarball of preloaded images
	I0805 23:02:38.394263 1587477 preload.go:172] Found /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0805 23:02:38.394272 1587477 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:02:38.394374 1587477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/config.json ...
	I0805 23:02:38.394606 1587477 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	W0805 23:02:38.414244 1587477 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0805 23:02:38.414254 1587477 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 23:02:38.414356 1587477 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 23:02:38.414383 1587477 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 23:02:38.414387 1587477 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 23:02:38.414394 1587477 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 23:02:38.414399 1587477 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0805 23:02:38.548751 1587477 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0805 23:02:38.548807 1587477 cache.go:194] Successfully downloaded all kic artifacts
	I0805 23:02:38.548836 1587477 start.go:360] acquireMachinesLock for functional-220049: {Name:mk7b71e4ad571603e9c0a4183afccc07bd345615 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:02:38.548908 1587477 start.go:364] duration metric: took 53.103µs to acquireMachinesLock for "functional-220049"
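The acquireMachinesLock step above serializes machine operations behind a named lock created with Delay:500ms and Timeout:10m0s. As a rough sketch only (not minikube's actual implementation; the lock file path below is hypothetical), the same poll-with-deadline pattern looks like this in Go:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryAcquire sketches a polling file lock: O_CREATE|O_EXCL succeeds for
	// exactly one caller; everyone else retries every delay until timeout.
	func tryAcquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return nil // lock held; caller removes path to release
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		if err := tryAcquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer os.Remove("/tmp/minikube-machines.lock")
		fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
	}

Create-if-absent gives atomic acquisition, which is why the uncontended case in the log completes in only 53.103µs.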
	I0805 23:02:38.548928 1587477 start.go:96] Skipping create...Using existing machine configuration
	I0805 23:02:38.548943 1587477 fix.go:54] fixHost starting: 
	I0805 23:02:38.549216 1587477 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
	I0805 23:02:38.564710 1587477 fix.go:112] recreateIfNeeded on functional-220049: state=Running err=<nil>
	W0805 23:02:38.564734 1587477 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 23:02:38.567767 1587477 out.go:177] * Updating the running docker "functional-220049" container ...
	I0805 23:02:38.570008 1587477 machine.go:94] provisionDockerMachine start ...
	I0805 23:02:38.570122 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:38.587485 1587477 main.go:141] libmachine: Using SSH client type: native
	I0805 23:02:38.587747 1587477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34647 <nil> <nil>}
	I0805 23:02:38.587754 1587477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 23:02:38.720289 1587477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-220049
	
	I0805 23:02:38.720311 1587477 ubuntu.go:169] provisioning hostname "functional-220049"
	I0805 23:02:38.720372 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:38.739471 1587477 main.go:141] libmachine: Using SSH client type: native
	I0805 23:02:38.739751 1587477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34647 <nil> <nil>}
	I0805 23:02:38.739759 1587477 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-220049 && echo "functional-220049" | sudo tee /etc/hostname
	I0805 23:02:38.884631 1587477 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-220049
	
	I0805 23:02:38.884698 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:38.901441 1587477 main.go:141] libmachine: Using SSH client type: native
	I0805 23:02:38.901697 1587477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34647 <nil> <nil>}
	I0805 23:02:38.901711 1587477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-220049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-220049/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-220049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:02:39.037411 1587477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
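The shell snippet above guarantees that /etc/hosts resolves the machine hostname: it rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same edit, assuming read/write access to the file (ensureHostsEntry is a hypothetical helper, not minikube code):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at
	// hostname, or appends one if no such line exists.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		replaced := false
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "functional-220049")
	}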
	I0805 23:02:39.037441 1587477 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19373-1559727/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-1559727/.minikube}
	I0805 23:02:39.037475 1587477 ubuntu.go:177] setting up certificates
	I0805 23:02:39.037511 1587477 provision.go:84] configureAuth start
	I0805 23:02:39.037648 1587477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-220049
	I0805 23:02:39.057710 1587477 provision.go:143] copyHostCerts
	I0805 23:02:39.057776 1587477 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.pem, removing ...
	I0805 23:02:39.057784 1587477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.pem
	I0805 23:02:39.057865 1587477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.pem (1078 bytes)
	I0805 23:02:39.057975 1587477 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-1559727/.minikube/cert.pem, removing ...
	I0805 23:02:39.057979 1587477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-1559727/.minikube/cert.pem
	I0805 23:02:39.058005 1587477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/cert.pem (1123 bytes)
	I0805 23:02:39.058058 1587477 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-1559727/.minikube/key.pem, removing ...
	I0805 23:02:39.058062 1587477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-1559727/.minikube/key.pem
	I0805 23:02:39.058085 1587477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-1559727/.minikube/key.pem (1679 bytes)
	I0805 23:02:39.058131 1587477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem org=jenkins.functional-220049 san=[127.0.0.1 192.168.49.2 functional-220049 localhost minikube]
	I0805 23:02:39.618209 1587477 provision.go:177] copyRemoteCerts
	I0805 23:02:39.618275 1587477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	(the repeated /etc/docker arguments come from one mkdir invocation per remote cert path; mkdir -p tolerates the duplicates)
	I0805 23:02:39.618317 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:39.635438 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:02:39.738522 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 23:02:39.764919 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0805 23:02:39.790619 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:02:39.816033 1587477 provision.go:87] duration metric: took 778.509336ms to configureAuth
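configureAuth above regenerates the Docker machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-220049, localhost, minikube) and then copies ca.pem, server.pem, and server-key.pem into /etc/docker. A sketch of generating a certificate with those SANs using Go's standard library; it is self-signed here for brevity, whereas the logged step signs with the minikube CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-220049"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line, split by type: IPs vs DNS names.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"functional-220049", "localhost", "minikube"},
		}
		// Self-signed for this sketch; the real flow passes the CA cert and key
		// as the parent instead of tmpl/key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}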
	I0805 23:02:39.816063 1587477 ubuntu.go:193] setting minikube options for container-runtime
	I0805 23:02:39.816261 1587477 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:02:39.816369 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:39.834377 1587477 main.go:141] libmachine: Using SSH client type: native
	I0805 23:02:39.834621 1587477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34647 <nil> <nil>}
	I0805 23:02:39.834633 1587477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:02:45.400502 1587477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:02:45.400519 1587477 machine.go:97] duration metric: took 6.830499662s to provisionDockerMachine
	I0805 23:02:45.400530 1587477 start.go:293] postStartSetup for "functional-220049" (driver="docker")
	I0805 23:02:45.400541 1587477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:02:45.400657 1587477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:02:45.400743 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:45.422305 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:02:45.518398 1587477 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:02:45.521833 1587477 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0805 23:02:45.521858 1587477 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0805 23:02:45.521867 1587477 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0805 23:02:45.521873 1587477 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0805 23:02:45.521883 1587477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/addons for local assets ...
	I0805 23:02:45.521963 1587477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-1559727/.minikube/files for local assets ...
	I0805 23:02:45.522044 1587477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/ssl/certs/15651212.pem -> 15651212.pem in /etc/ssl/certs
	I0805 23:02:45.522119 1587477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/test/nested/copy/1565121/hosts -> hosts in /etc/test/nested/copy/1565121
	I0805 23:02:45.522174 1587477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1565121
	I0805 23:02:45.531272 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/ssl/certs/15651212.pem --> /etc/ssl/certs/15651212.pem (1708 bytes)
	I0805 23:02:45.556288 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/test/nested/copy/1565121/hosts --> /etc/test/nested/copy/1565121/hosts (40 bytes)
	I0805 23:02:45.582307 1587477 start.go:296] duration metric: took 181.761335ms for postStartSetup
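postStartSetup above scans .minikube/files and mirrors each file to the same relative path inside the node (e.g. files/etc/ssl/certs/15651212.pem --> /etc/ssl/certs/15651212.pem). A sketch of that local-to-remote path mapping, with listAssets as a hypothetical helper:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	// listAssets walks root and maps every file to the node path it would be
	// copied to, i.e. its path relative to root, re-anchored at "/".
	func listAssets(root string) ([]string, error) {
		var targets []string
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			targets = append(targets, "/"+strings.TrimPrefix(path, root+"/"))
			return nil
		})
		return targets, err
	}

	func main() {
		targets, err := listAssets("/home/jenkins/minikube-integration/19373-1559727/.minikube/files")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, t := range targets {
			fmt.Println(t) // e.g. /etc/ssl/certs/15651212.pem
		}
	}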
	I0805 23:02:45.582382 1587477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:02:45.582473 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:45.599941 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:02:45.695083 1587477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0805 23:02:45.700362 1587477 fix.go:56] duration metric: took 7.151419389s for fixHost
	I0805 23:02:45.700378 1587477 start.go:83] releasing machines lock for "functional-220049", held for 7.151462211s
	I0805 23:02:45.700449 1587477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-220049
	I0805 23:02:45.717246 1587477 ssh_runner.go:195] Run: cat /version.json
	I0805 23:02:45.717295 1587477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:02:45.717302 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:45.717360 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:02:45.738446 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:02:45.746221 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:02:45.957043 1587477 ssh_runner.go:195] Run: systemctl --version
	I0805 23:02:45.962178 1587477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:02:46.107590 1587477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 23:02:46.112376 1587477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:02:46.121820 1587477 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0805 23:02:46.121890 1587477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:02:46.131117 1587477 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 23:02:46.131131 1587477 start.go:495] detecting cgroup driver to use...
	I0805 23:02:46.131167 1587477 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0805 23:02:46.131214 1587477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:02:46.143955 1587477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:02:46.156622 1587477 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:02:46.156680 1587477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:02:46.171282 1587477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:02:46.183988 1587477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:02:46.314265 1587477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:02:46.432068 1587477 docker.go:233] disabling docker service ...
	I0805 23:02:46.432136 1587477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:02:46.445275 1587477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:02:46.457883 1587477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:02:46.579903 1587477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:02:46.710755 1587477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:02:46.722144 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:02:46.739809 1587477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:02:46.739865 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.750926 1587477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:02:46.750991 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.762114 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.773093 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.784173 1587477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:02:46.795328 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.805280 1587477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.814939 1587477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:02:46.824703 1587477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:02:46.833606 1587477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:02:46.842569 1587477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:02:46.978374 1587477 ssh_runner.go:195] Run: sudo systemctl restart crio
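The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before restarting crio. A sketch of the same line-oriented key rewrite in Go (setConfKey is a hypothetical helper; the logged run shells out to sed instead):

	package main

	import (
		"os"
		"regexp"
	)

	// setConfKey replaces any line containing `key = ...` with `key = "value"`,
	// mirroring sed edits like s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
		_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
	}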
	I0805 23:02:47.151527 1587477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:02:47.151600 1587477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:02:47.155391 1587477 start.go:563] Will wait 60s for crictl version
	I0805 23:02:47.155446 1587477 ssh_runner.go:195] Run: which crictl
	I0805 23:02:47.159318 1587477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:02:47.197633 1587477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0805 23:02:47.197717 1587477 ssh_runner.go:195] Run: crio --version
	I0805 23:02:47.235264 1587477 ssh_runner.go:195] Run: crio --version
	I0805 23:02:47.280115 1587477 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0805 23:02:47.286345 1587477 cli_runner.go:164] Run: docker network inspect functional-220049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0805 23:02:47.302953 1587477 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0805 23:02:47.309239 1587477 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0805 23:02:47.311576 1587477 kubeadm.go:883] updating cluster {Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 23:02:47.311715 1587477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:02:47.311792 1587477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:02:47.356468 1587477 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:02:47.356479 1587477 crio.go:433] Images already preloaded, skipping extraction
	I0805 23:02:47.356536 1587477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:02:47.397362 1587477 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:02:47.397376 1587477 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:02:47.397382 1587477 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.30.3 crio true true} ...
	I0805 23:02:47.397487 1587477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-220049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:02:47.397569 1587477 ssh_runner.go:195] Run: crio config
	I0805 23:02:47.457100 1587477 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0805 23:02:47.457166 1587477 cni.go:84] Creating CNI manager for ""
	I0805 23:02:47.457173 1587477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 23:02:47.457189 1587477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:02:47.457223 1587477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-220049 NodeName:functional-220049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:02:47.457457 1587477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-220049"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 23:02:47.457547 1587477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:02:47.468209 1587477 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:02:47.468272 1587477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 23:02:47.477270 1587477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0805 23:02:47.495748 1587477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:02:47.514612 1587477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2005 bytes)
	I0805 23:02:47.535238 1587477 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0805 23:02:47.538999 1587477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:02:47.658519 1587477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:02:47.671008 1587477 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049 for IP: 192.168.49.2
	I0805 23:02:47.671019 1587477 certs.go:194] generating shared ca certs ...
	I0805 23:02:47.671033 1587477 certs.go:226] acquiring lock for ca certs: {Name:mk45a3b9d27e38f3abe9128d73d1ec1f570fe6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:02:47.671188 1587477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key
	I0805 23:02:47.671232 1587477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key
	I0805 23:02:47.671238 1587477 certs.go:256] generating profile certs ...
	I0805 23:02:47.671327 1587477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.key
	I0805 23:02:47.671372 1587477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/apiserver.key.56b3b65f
	I0805 23:02:47.671407 1587477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/proxy-client.key
	I0805 23:02:47.671516 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/1565121.pem (1338 bytes)
	W0805 23:02:47.671543 1587477 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/1565121_empty.pem, impossibly tiny 0 bytes
	I0805 23:02:47.671550 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:02:47.671577 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/ca.pem (1078 bytes)
	I0805 23:02:47.671601 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:02:47.671626 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/key.pem (1679 bytes)
	I0805 23:02:47.671672 1587477 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/ssl/certs/15651212.pem (1708 bytes)
	I0805 23:02:47.672364 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:02:47.699444 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:02:47.723527 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:02:47.748498 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:02:47.774640 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 23:02:47.799828 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:02:47.824562 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:02:47.849134 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 23:02:47.874994 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:02:47.899754 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/certs/1565121.pem --> /usr/share/ca-certificates/1565121.pem (1338 bytes)
	I0805 23:02:47.924186 1587477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/ssl/certs/15651212.pem --> /usr/share/ca-certificates/15651212.pem (1708 bytes)
	I0805 23:02:47.948681 1587477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:02:47.966694 1587477 ssh_runner.go:195] Run: openssl version
	I0805 23:02:47.972252 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15651212.pem && ln -fs /usr/share/ca-certificates/15651212.pem /etc/ssl/certs/15651212.pem"
	I0805 23:02:47.982346 1587477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15651212.pem
	I0805 23:02:47.986317 1587477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:01 /usr/share/ca-certificates/15651212.pem
	I0805 23:02:47.986376 1587477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15651212.pem
	I0805 23:02:47.993710 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15651212.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:02:48.006091 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:02:48.020891 1587477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:02:48.025393 1587477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:02:48.025458 1587477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:02:48.033910 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:02:48.044447 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565121.pem && ln -fs /usr/share/ca-certificates/1565121.pem /etc/ssl/certs/1565121.pem"
	I0805 23:02:48.055622 1587477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565121.pem
	I0805 23:02:48.059827 1587477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:01 /usr/share/ca-certificates/1565121.pem
	I0805 23:02:48.059890 1587477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565121.pem
	I0805 23:02:48.067962 1587477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565121.pem /etc/ssl/certs/51391683.0"
	I0805 23:02:48.078118 1587477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:02:48.082028 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 23:02:48.089309 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 23:02:48.096513 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 23:02:48.103895 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 23:02:48.111309 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 23:02:48.119725 1587477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
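The run of openssl x509 -checkend 86400 calls above confirms that each control-plane certificate remains valid for at least another 24 hours. The equivalent check sketched in Go against one of the files named above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same test as `openssl x509 -checkend 86400`: does NotAfter fall
		// before now + 86400 seconds?
		if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
			fmt.Println("certificate will expire within 86400 seconds")
		} else {
			fmt.Println("certificate is valid beyond the next 24 hours")
		}
	}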
	I0805 23:02:48.127672 1587477 kubeadm.go:392] StartCluster: {Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:02:48.127757 1587477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:02:48.127835 1587477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:02:48.203750 1587477 cri.go:89] found id: "0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628"
	I0805 23:02:48.203763 1587477 cri.go:89] found id: "19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5"
	I0805 23:02:48.203767 1587477 cri.go:89] found id: "707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b"
	I0805 23:02:48.203771 1587477 cri.go:89] found id: "c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35"
	I0805 23:02:48.203787 1587477 cri.go:89] found id: "7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098"
	I0805 23:02:48.203818 1587477 cri.go:89] found id: "8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e"
	I0805 23:02:48.203820 1587477 cri.go:89] found id: "aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86"
	I0805 23:02:48.203822 1587477 cri.go:89] found id: "929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a"
	I0805 23:02:48.203824 1587477 cri.go:89] found id: "fa60efd6fa5823ba231c15b6de353ff595ffcf889e083ac86895d77cf1d09035"
	I0805 23:02:48.203829 1587477 cri.go:89] found id: "93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1"
	I0805 23:02:48.203831 1587477 cri.go:89] found id: "4c8298923d1d4e5327bebfb3a686e0fe461b09010f508da9024c6f4022fa095a"
	I0805 23:02:48.203834 1587477 cri.go:89] found id: "9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29"
	I0805 23:02:48.203836 1587477 cri.go:89] found id: "d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847"
	I0805 23:02:48.203838 1587477 cri.go:89] found id: "5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd5ff1f"
	I0805 23:02:48.203842 1587477 cri.go:89] found id: "4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4"
	I0805 23:02:48.203844 1587477 cri.go:89] found id: ""
	I0805 23:02:48.203901 1587477 ssh_runner.go:195] Run: sudo runc list -f json
	I0805 23:02:48.246146 1587477 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628/userdata","rootfs":"/var/lib/containers/storage/overlay/69492e32e7ca3bd0913f391e569507ce05c1169c1bd842baa5b562d0ea8c6eca/merged","created":"2024-08-05T23:02:36.94917266Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d26f83ee","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d26f83ee\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termin
ationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:36.901674998Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cecc1171-d50f-4849-9e79-0df5a085ff0c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_cecc1171-d50f-4849-9e79-0df5a085ff0c/storage-provisioner/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provision
er\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/69492e32e7ca3bd0913f391e569507ce05c1169c1bd842baa5b562d0ea8c6eca/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_cecc1171-d50f-4849-9e79-0df5a085ff0c_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9dad4656063058190bc15b3bd3a057784245704f187d97f5259baf678dac016a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9dad4656063058190bc15b3bd3a057784245704f187d97f5259baf678dac016a","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_cecc1171-d50f-4849-9e79-0df5a085ff0c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/
kubelet/pods/cecc1171-d50f-4849-9e79-0df5a085ff0c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cecc1171-d50f-4849-9e79-0df5a085ff0c/containers/storage-provisioner/473f3348\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cecc1171-d50f-4849-9e79-0df5a085ff0c/volumes/kubernetes.io~projected/kube-api-access-5wt9z\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cecc1171-d50f-4849-9e79-0df5a085ff0c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-tes
t\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2024-08-05T23:02:04.377751704Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5/userdata","rootfs":"/var/lib/containers/storage/overlay/10f2e709dbfe488f367faf5248fede49a43efafa69c0c1109f2f6061a1bffd03/merged","created":"2024-08-05T23:02
:17.39558676Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"89a8ef20","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"89a8ef20\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/de
v/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.230768563Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri-o.ImageRef":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7db6d8ff4d-v6mkh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0cd4cbba-7612-4bed-ab12-acad18367268\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7db6d8ff4d-v6mkh_0cd4cbba-7612-4bed-ab12-acad18367268/c
oredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/10f2e709dbfe488f367faf5248fede49a43efafa69c0c1109f2f6061a1bffd03/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7db6d8ff4d-v6mkh_kube-system_0cd4cbba-7612-4bed-ab12-acad18367268_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/714228be44cad0e5f396b40a4e2b2aacdab9b369f8ca173ebb839a434dfcb042/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"714228be44cad0e5f396b40a4e2b2aacdab9b369f8ca173ebb839a434dfcb042","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7db6d8ff4d-v6mkh_kube-system_0cd4cbba-7612-4bed-ab12-acad18367268_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/vol
umes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/containers/coredns/65e10842\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/volumes/kubernetes.io~projected/kube-api-access-mgbjr\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7db6d8ff4d-v6mkh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0cd4cbba-7612-4bed-ab12-acad18367268","kubernetes.io/config.seen":"2024-08-05T23:
02:04.370866427Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4/userdata","rootfs":"/var/lib/containers/storage/overlay/312db1afbe7f750b43f936ce28849e7f9f8270b5995c80b13bc69b76634c4fea/merged","created":"2024-08-05T23:01:28.733778654Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7337c8d9","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7337c8d9\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:01:28.653959363Z","io.kubernetes.cri-o.Image":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.3","io.kubernetes.cri-o.ImageRef":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e0dd1a64bac1e7f195bdc9e6d61e2ebf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-220049_e0dd1a64bac1e7f195bdc9e6d61e2ebf/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-
scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/312db1afbe7f750b43f936ce28849e7f9f8270b5995c80b13bc69b76634c4fea/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-220049_kube-system_e0dd1a64bac1e7f195bdc9e6d61e2ebf_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bb53fda2a614bd796df86e7cf5bb55c695b7d50dff6f99c504475c64d8f40314/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb53fda2a614bd796df86e7cf5bb55c695b7d50dff6f99c504475c64d8f40314","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-220049_kube-system_e0dd1a64bac1e7f195bdc9e6d61e2ebf_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e0dd1a64bac1e7f195bdc9e6d61e2ebf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relab
el\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e0dd1a64bac1e7f195bdc9e6d61e2ebf/containers/kube-scheduler/8bcfd74e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e0dd1a64bac1e7f195bdc9e6d61e2ebf","kubernetes.io/config.hash":"e0dd1a64bac1e7f195bdc9e6d61e2ebf","kubernetes.io/config.seen":"2024-08-05T23:01:28.147035600Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd5ff1f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd
5ff1f/userdata","rootfs":"/var/lib/containers/storage/overlay/f2563d9df107c871f3d3aa58fa74912f80cebfb04c1448ae8c273f5122d1b757/merged","created":"2024-08-05T23:01:28.764106681Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b89b71db","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b89b71db\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd5ff1f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:01:28.665948851Z","io.
kubernetes.cri-o.Image":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.3","io.kubernetes.cri-o.ImageRef":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"144585ba3ed1f613d186728b431d0d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-220049_144585ba3ed1f613d186728b431d0d09/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f2563d9df107c871f3d3aa58fa74912f80cebfb04c1448ae8c273f5122d1b757/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-220049_kube-system_144585ba3ed1f613d186728b431d0d09_0","io.kubernetes.cri-o.Re
solvPath":"/run/containers/storage/overlay-containers/534d6709ca1139ac1cb85e410f24705aa83ae4613c10b8a8d60c7d62ff3a1875/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"534d6709ca1139ac1cb85e410f24705aa83ae4613c10b8a8d60c7d62ff3a1875","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-220049_kube-system_144585ba3ed1f613d186728b431d0d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/144585ba3ed1f613d186728b431d0d09/containers/kube-apiserver/f56802ab\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/144585ba3ed1f613d186728b431d0d09/etc-host
s\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"144585ba3ed1f613d186728b431d0d09","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"144585ba3e
d1f613d186728b431d0d09","kubernetes.io/config.seen":"2024-08-05T23:01:28.147032170Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b/userdata","rootfs":"/var/lib/containers/storage/overlay/6ef724c606c252ec701460437f2d987ff650e883d74a76b3ce6e6936c7921e73/merged","created":"2024-08-05T23:02:17.383659112Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7337c8d9","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7337c8d9\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.termin
ationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.194410503Z","io.kubernetes.cri-o.Image":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.3","io.kubernetes.cri-o.ImageRef":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e0dd1a64bac1e7f195bdc9e6d61e2ebf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-220049_e0dd1a64bac1e7f195bdc9e6d61e2ebf/kub
e-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6ef724c606c252ec701460437f2d987ff650e883d74a76b3ce6e6936c7921e73/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-220049_kube-system_e0dd1a64bac1e7f195bdc9e6d61e2ebf_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bb53fda2a614bd796df86e7cf5bb55c695b7d50dff6f99c504475c64d8f40314/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bb53fda2a614bd796df86e7cf5bb55c695b7d50dff6f99c504475c64d8f40314","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-220049_kube-system_e0dd1a64bac1e7f195bdc9e6d61e2ebf_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e0dd1a64bac1e7f
195bdc9e6d61e2ebf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e0dd1a64bac1e7f195bdc9e6d61e2ebf/containers/kube-scheduler/132715fe\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e0dd1a64bac1e7f195bdc9e6d61e2ebf","kubernetes.io/config.hash":"e0dd1a64bac1e7f195bdc9e6d61e2ebf","kubernetes.io/config.seen":"2024-08-05T23:01:28.147035600Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098","pid":0,"status":"stopped","bundle":"/run/containers/stor
age/overlay-containers/7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098/userdata","rootfs":"/var/lib/containers/storage/overlay/a3031481e072520bfd6d2d133d50ecd83fcac37faaa692093a3d8694ae7bbc65/merged","created":"2024-08-05T23:02:17.396142792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"64d54869","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"64d54869\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098","io.kubernetes.cri-o.Con
tainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.157009704Z","io.kubernetes.cri-o.Image":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.3","io.kubernetes.cri-o.ImageRef":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"afe54f2f8a1ce000daa47bee554d110a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-220049_afe54f2f8a1ce000daa47bee554d110a/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a3031481e072520bfd6d2d133d50ecd83fcac37faaa692093a3d8694ae7bbc65/
merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-220049_kube-system_afe54f2f8a1ce000daa47bee554d110a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d7e2904c806891f7eafacb27f88bc9d130ab29b5b57b26164874b91bed00e0d5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d7e2904c806891f7eafacb27f88bc9d130ab29b5b57b26164874b91bed00e0d5","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-220049_kube-system_afe54f2f8a1ce000daa47bee554d110a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/afe54f2f8a1ce000daa47bee554d110a/containers/kube-controller-ma
nager/73bc8565\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/afe54f2f8a1ce000daa47bee554d110a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"p
ropagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"afe54f2f8a1ce000daa47bee554d110a","kubernetes.io/config.hash":"afe54f2f8a1ce000daa47bee554d110a","kubernetes.io/config.seen":"2024-08-05T23:01:28.147034139Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e/userdata","rootfs":"/var/lib/containers/storage/overlay/a0051cc392270488ef5d371e9d1db9ecf916f3040b126d861d71b8e5e4d53dcb/merged",
"created":"2024-08-05T23:02:17.405258335Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"debd8b","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"debd8b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.127711253Z","io.kubernetes.cri-o.Image":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","io.kubernetes.cri-o.ImageName":"docker.io/kindest/
kindnetd:v20240730-75a5af0c","io.kubernetes.cri-o.ImageRef":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-v22mx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6683675b-0f55-48bb-91fa-07c818972a97\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-v22mx_6683675b-0f55-48bb-91fa-07c818972a97/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a0051cc392270488ef5d371e9d1db9ecf916f3040b126d861d71b8e5e4d53dcb/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-v22mx_kube-system_6683675b-0f55-48bb-91fa-07c818972a97_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6cce44ad7603389498dc4a287391f60217a2497d873f10df3009e6816ce5fde7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6cce
44ad7603389498dc4a287391f60217a2497d873f10df3009e6816ce5fde7","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-v22mx_kube-system_6683675b-0f55-48bb-91fa-07c818972a97_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6683675b-0f55-48bb-91fa-07c818972a97/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6683675b-0f55-48bb-91fa-07c818972a97/containers/kindnet-cni/0501fc49\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":f
alse},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6683675b-0f55-48bb-91fa-07c818972a97/volumes/kubernetes.io~projected/kube-api-access-5qhmz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-v22mx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6683675b-0f55-48bb-91fa-07c818972a97","kubernetes.io/config.seen":"2024-08-05T23:01:49.863809840Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a/userdata","rootfs":"/var/lib/containers/storage/overlay/626eb52
1602a305dd5d93af13d54f91ea3a1503910e2c072b64aaa2258347a9d/merged","created":"2024-08-05T23:02:17.385783411Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b89b71db","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b89b71db\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.067300386Z","io.kubernetes.cri-o.Image":"61773190d42ff0792f3bab2658e80b1c07519170955b
b350b153b564ef28f4ca","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.3","io.kubernetes.cri-o.ImageRef":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"144585ba3ed1f613d186728b431d0d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-220049_144585ba3ed1f613d186728b431d0d09/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/626eb521602a305dd5d93af13d54f91ea3a1503910e2c072b64aaa2258347a9d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-220049_kube-system_144585ba3ed1f613d186728b431d0d09_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5
34d6709ca1139ac1cb85e410f24705aa83ae4613c10b8a8d60c7d62ff3a1875/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"534d6709ca1139ac1cb85e410f24705aa83ae4613c10b8a8d60c7d62ff3a1875","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-220049_kube-system_144585ba3ed1f613d186728b431d0d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/144585ba3ed1f613d186728b431d0d09/containers/kube-apiserver/6ed91b27\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/144585ba3ed1f613d186728b431d0d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_rela
bel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"144585ba3ed1f613d186728b431d0d09","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"144585ba3ed1f613d186728b431d0d09","kubernetes.io/config.seen":"20
24-08-05T23:01:28.147032170Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1/userdata","rootfs":"/var/lib/containers/storage/overlay/1423858f023e3c5e8f264ccd7669d730ec78cbbce0be43fa3670872ea269d48a/merged","created":"2024-08-05T23:02:04.783726151Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"89a8ef20","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"
File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"89a8ef20\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:04.751974126Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/
coredns:v1.11.1","io.kubernetes.cri-o.ImageRef":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-7db6d8ff4d-v6mkh\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0cd4cbba-7612-4bed-ab12-acad18367268\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-7db6d8ff4d-v6mkh_0cd4cbba-7612-4bed-ab12-acad18367268/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1423858f023e3c5e8f264ccd7669d730ec78cbbce0be43fa3670872ea269d48a/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-7db6d8ff4d-v6mkh_kube-system_0cd4cbba-7612-4bed-ab12-acad18367268_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/714228be44cad0e5f396b40a4e2b2aacdab9b369f8ca173ebb839a434dfcb042/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"714228be44cad
0e5f396b40a4e2b2aacdab9b369f8ca173ebb839a434dfcb042","io.kubernetes.cri-o.SandboxName":"k8s_coredns-7db6d8ff4d-v6mkh_kube-system_0cd4cbba-7612-4bed-ab12-acad18367268_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/containers/coredns/8b7d5f6a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kuberne
tes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/0cd4cbba-7612-4bed-ab12-acad18367268/volumes/kubernetes.io~projected/kube-api-access-mgbjr\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-7db6d8ff4d-v6mkh","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0cd4cbba-7612-4bed-ab12-acad18367268","kubernetes.io/config.seen":"2024-08-05T23:02:04.370866427Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29/userdata","rootfs":"/var/lib/containers/storage/overlay/77e538d13eec33640498d319d3efab34e5fff33c9961610f37203cedd376c4cb/merged","created":"2024-08-05T23:01:28.755329392Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.contain
er.hash":"64d54869","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"64d54869\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:01:28.714340686Z","io.kubernetes.cri-o.Image":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.3","io.kubernetes.cri-o.ImageRef":"8e97cdb19e7cc420af7c71de8b5c9ab5
36bd278758c8c0878c464b833d91b31a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"afe54f2f8a1ce000daa47bee554d110a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-220049_afe54f2f8a1ce000daa47bee554d110a/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/77e538d13eec33640498d319d3efab34e5fff33c9961610f37203cedd376c4cb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-220049_kube-system_afe54f2f8a1ce000daa47bee554d110a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d7e2904c806891f7eafacb27f88bc9d130ab29b5b57b26164874b91bed00e0d5/userdata/resolv.conf","io.kubernetes.cri-o.San
dboxID":"d7e2904c806891f7eafacb27f88bc9d130ab29b5b57b26164874b91bed00e0d5","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-220049_kube-system_afe54f2f8a1ce000daa47bee554d110a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/afe54f2f8a1ce000daa47bee554d110a/containers/kube-controller-manager/1c5e5c89\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/afe54f2f8a1ce000daa47bee554d110a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"reado
nly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-220049","io.kubernetes.pod.
namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"afe54f2f8a1ce000daa47bee554d110a","kubernetes.io/config.hash":"afe54f2f8a1ce000daa47bee554d110a","kubernetes.io/config.seen":"2024-08-05T23:01:28.147034139Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86/userdata","rootfs":"/var/lib/containers/storage/overlay/92be30ef88047ca74833f238d5639434c8606b9f502bea6b225647f3445240b3/merged","created":"2024-08-05T23:02:17.382790803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dad7309d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File",
"io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dad7309d\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.076283697Z","io.kubernetes.cri-o.Image":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b6481bdda6d3ce187e156967d130531
f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-220049_b6481bdda6d3ce187e156967d130531f/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/92be30ef88047ca74833f238d5639434c8606b9f502bea6b225647f3445240b3/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-220049_kube-system_b6481bdda6d3ce187e156967d130531f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8673f2c6b5721a2b93b42d6ab6eb2be9284163dfd76da4ffcf840a0901f108a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8673f2c6b5721a2b93b42d6ab6eb2be9284163dfd76da4ffcf840a0901f108a4","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-220049_kube-system_b6481bdda6d3ce187e156967d130531f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_pat
h\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b6481bdda6d3ce187e156967d130531f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b6481bdda6d3ce187e156967d130531f/containers/etcd/6c755b56\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b6481bdda6d3ce187e156967d130531f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b6481bdda6d3ce187e156967
d130531f","kubernetes.io/config.seen":"2024-08-05T23:01:28.147025434Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35/userdata","rootfs":"/var/lib/containers/storage/overlay/adfd861a7265e95d65c8a8db7290e47b92495d07313634c81c0b1f2393f1458c/merged","created":"2024-08-05T23:02:17.38440917Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a6d43dff","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a6d43dff\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":
\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:02:17.170384379Z","io.kubernetes.cri-o.Image":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.30.3","io.kubernetes.cri-o.ImageRef":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-pbqwk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d2253415-e3c3-4ca2-9076-693b13e76c5d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-pbqwk_d2253415-e3c3-4ca2-9076-693b13e76c5d/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"n
ame\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/adfd861a7265e95d65c8a8db7290e47b92495d07313634c81c0b1f2393f1458c/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-pbqwk_kube-system_d2253415-e3c3-4ca2-9076-693b13e76c5d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0ce0f74f0c68ef9bfff4199afd5dcb474ce2a59b18a270ce53c24181b972d93d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0ce0f74f0c68ef9bfff4199afd5dcb474ce2a59b18a270ce53c24181b972d93d","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-pbqwk_kube-system_d2253415-e3c3-4ca2-9076-693b13e76c5d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/module
s\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d2253415-e3c3-4ca2-9076-693b13e76c5d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d2253415-e3c3-4ca2-9076-693b13e76c5d/containers/kube-proxy/34cb85e5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/d2253415-e3c3-4ca2-9076-693b13e76c5d/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d2253415-e3c3-4ca2-9076-693b13e76c5d/volumes/kubernetes.io~projected/kube-api-access-rsf8l\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube
-proxy-pbqwk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d2253415-e3c3-4ca2-9076-693b13e76c5d","kubernetes.io/config.seen":"2024-08-05T23:01:49.936172162Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847/userdata","rootfs":"/var/lib/containers/storage/overlay/13cfc763a968cc9b0041bf38564b423b118bf27a88cd23d19867989fd9877874/merged","created":"2024-08-05T23:01:28.762482612Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dad7309d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annota
tions":"{\"io.kubernetes.container.hash\":\"dad7309d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-05T23:01:28.686197191Z","io.kubernetes.cri-o.Image":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-220049\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b6481bdda6d3ce187e156967d130531f\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_etcd-functional-220049_b6481bdda6d3ce187e156967d130531f/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/13cfc763a968cc9b0041bf38564b423b118bf27a88cd23d19867989fd9877874/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-220049_kube-system_b6481bdda6d3ce187e156967d130531f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8673f2c6b5721a2b93b42d6ab6eb2be9284163dfd76da4ffcf840a0901f108a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8673f2c6b5721a2b93b42d6ab6eb2be9284163dfd76da4ffcf840a0901f108a4","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-220049_kube-system_b6481bdda6d3ce187e156967d130531f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/l
ib/kubelet/pods/b6481bdda6d3ce187e156967d130531f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b6481bdda6d3ce187e156967d130531f/containers/etcd/96581622\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-220049","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b6481bdda6d3ce187e156967d130531f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b6481bdda6d3ce187e156967d130531f","kubernetes.io/config.seen":"20
24-08-05T23:01:28.147025434Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0805 23:02:48.246875 1587477 cri.go:126] list returned 13 containers
	I0805 23:02:48.246884 1587477 cri.go:129] container: {ID:0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628 Status:stopped}
	I0805 23:02:48.246897 1587477 cri.go:135] skipping {0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246905 1587477 cri.go:129] container: {ID:19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5 Status:stopped}
	I0805 23:02:48.246909 1587477 cri.go:135] skipping {19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246914 1587477 cri.go:129] container: {ID:4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4 Status:stopped}
	I0805 23:02:48.246917 1587477 cri.go:135] skipping {4a1c04f0063b4ca5cb7ec182c9f9208402d322fec96816517aeed97981d655c4 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246921 1587477 cri.go:129] container: {ID:5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd5ff1f Status:stopped}
	I0805 23:02:48.246924 1587477 cri.go:135] skipping {5bf69c239567197420c226a7dddb2a9797769222b3073c263a535115ddd5ff1f stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246930 1587477 cri.go:129] container: {ID:707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b Status:stopped}
	I0805 23:02:48.246935 1587477 cri.go:135] skipping {707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246940 1587477 cri.go:129] container: {ID:7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098 Status:stopped}
	I0805 23:02:48.246943 1587477 cri.go:135] skipping {7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246948 1587477 cri.go:129] container: {ID:8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e Status:stopped}
	I0805 23:02:48.246951 1587477 cri.go:135] skipping {8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246956 1587477 cri.go:129] container: {ID:929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a Status:stopped}
	I0805 23:02:48.246960 1587477 cri.go:135] skipping {929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246964 1587477 cri.go:129] container: {ID:93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1 Status:stopped}
	I0805 23:02:48.246967 1587477 cri.go:135] skipping {93deca5419831994620bb7a0785d500692ac95e06175fa2879993aefbde683d1 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246971 1587477 cri.go:129] container: {ID:9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29 Status:stopped}
	I0805 23:02:48.246974 1587477 cri.go:135] skipping {9de4124c5ba08b17c2617effe5baa8205f9e312a118e51870cf29d1aadedad29 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246979 1587477 cri.go:129] container: {ID:aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86 Status:stopped}
	I0805 23:02:48.246986 1587477 cri.go:135] skipping {aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246990 1587477 cri.go:129] container: {ID:c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35 Status:stopped}
	I0805 23:02:48.246994 1587477 cri.go:135] skipping {c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35 stopped}: state = "stopped", want "paused"
	I0805 23:02:48.246999 1587477 cri.go:129] container: {ID:d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847 Status:stopped}
	I0805 23:02:48.247003 1587477 cri.go:135] skipping {d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847 stopped}: state = "stopped", want "paused"
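The cri.go:129/135 pairs above are minikube enumerating the 13 CRI containers it just inspected and filtering them by runtime state: it is looking for "paused" containers to unpause, and because the cluster was freshly restarted every entry is "stopped" and gets skipped. A minimal Go sketch of that filter (hypothetical types, not minikube's actual code):

    package main

    import "fmt"

    // container mirrors the {ID Status} pairs logged at cri.go:129 above.
    type container struct {
        ID     string
        Status string
    }

    // filterByState keeps only containers whose Status matches want and
    // prints a skip line for the rest, like the cri.go:135 entries above.
    func filterByState(cs []container, want string) []container {
        var kept []container
        for _, c := range cs {
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        all := []container{{ID: "0e1478a52234…", Status: "stopped"}, {ID: "fake-paused-id", Status: "paused"}}
        fmt.Printf("%d of %d containers match\n", len(filterByState(all, "paused")), len(all))
    }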
	I0805 23:02:48.247059 1587477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 23:02:48.260980 1587477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 23:02:48.260989 1587477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 23:02:48.261044 1587477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 23:02:48.275847 1587477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:02:48.276481 1587477 kubeconfig.go:125] found "functional-220049" server: "https://192.168.49.2:8441"
	I0805 23:02:48.278060 1587477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 23:02:48.295572 1587477 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-08-05 23:01:20.179489625 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-08-05 23:02:47.530768759 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
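kubeadm.go:640 above chose a reconfigure because `diff -u` between the deployed /var/tmp/minikube/kubeadm.yaml and the freshly rendered .new file exited non-zero; the only drift is the admission-plugin list the test changed on purpose. A sketch of that drift check in Go, assuming local command execution in place of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new`: exit 0 means identical, exit 1
    // means the files differ (drift), anything else is a real failure.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // no drift
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // files differ: reconfigure
        }
        return false, "", err // diff itself failed
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drifted {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }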
	I0805 23:02:48.295603 1587477 kubeadm.go:1160] stopping kube-system containers ...
	I0805 23:02:48.295613 1587477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0805 23:02:48.295671 1587477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:02:48.352458 1587477 cri.go:89] found id: "0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628"
	I0805 23:02:48.352471 1587477 cri.go:89] found id: "19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5"
	I0805 23:02:48.352474 1587477 cri.go:89] found id: "707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b"
	I0805 23:02:48.352477 1587477 cri.go:89] found id: "c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35"
	I0805 23:02:48.352480 1587477 cri.go:89] found id: "7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098"
	I0805 23:02:48.352483 1587477 cri.go:89] found id: "8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e"
	I0805 23:02:48.352486 1587477 cri.go:89] found id: "aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86"
	I0805 23:02:48.352489 1587477 cri.go:89] found id: "929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a"
	I0805 23:02:48.352492 1587477 cri.go:89] found id: "d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847"
	I0805 23:02:48.352498 1587477 cri.go:89] found id: ""
	I0805 23:02:48.352503 1587477 cri.go:234] Stopping containers: [0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628 19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5 707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35 7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098 8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86 929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847]
	I0805 23:02:48.352596 1587477 ssh_runner.go:195] Run: which crictl
	I0805 23:02:48.357414 1587477 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628 19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5 707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35 7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098 8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86 929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847
	W0805 23:02:48.428689 1587477 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628 19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5 707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35 7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098 8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86 929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847: Process exited with status 1
	stdout:
	0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628
	19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5
	707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b
	c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35
	7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098
	8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e
	aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86
	929d6b420b59e45f01d2a59b778f6eaca004ddd623f171c219cc1884d6cc608a
	
	stderr:
	E0805 23:02:48.425784    4478 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847\": container with ID starting with d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847 not found: ID does not exist" containerID="d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847"
	time="2024-08-05T23:02:48Z" level=fatal msg="stopping the container \"d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847\": rpc error: code = NotFound desc = could not find container \"d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847\": container with ID starting with d9503050b831ad32e239a7780da6c3e65c8c009eed8d3898cb9a5b032c9c3847 not found: ID does not exist"
	I0805 23:02:48.428758 1587477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 23:02:48.527778 1587477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 23:02:48.537403 1587477 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  5 23:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug  5 23:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  5 23:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug  5 23:01 /etc/kubernetes/scheduler.conf
	
	I0805 23:02:48.537463 1587477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0805 23:02:48.546902 1587477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0805 23:02:48.556057 1587477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0805 23:02:48.565426 1587477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:02:48.565483 1587477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 23:02:48.574496 1587477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0805 23:02:48.583532 1587477 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:02:48.583589 1587477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
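The grep/rm pairs above validate each kubeconfig against the expected control-plane endpoint: `grep` exits 1 when https://control-plane.minikube.internal:8441 is absent, so controller-manager.conf and scheduler.conf are deleted and left for the `kubeadm init phase kubeconfig` run below to regenerate. A sketch of that check (illustrative helper, not minikube's API):

    package restart

    import "os/exec"

    // ensureEndpoint keeps conf only if it references endpoint. A grep
    // "no match" (exit status 1) marks the kubeconfig stale and removes
    // it for regeneration; any other failure is propagated.
    func ensureEndpoint(conf, endpoint string) error {
        err := exec.Command("sudo", "grep", endpoint, conf).Run()
        if err == nil {
            return nil // endpoint present: keep the file
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return exec.Command("sudo", "rm", "-f", conf).Run()
        }
        return err // grep itself failed
    }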
	I0805 23:02:48.592596 1587477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 23:02:48.602150 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 23:02:48.659177 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 23:02:51.941059 1587477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.281852548s)
	I0805 23:02:51.941083 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 23:02:52.167230 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 23:02:52.256900 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
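Rather than a full `kubeadm init`, the restart path replays individual init phases against the updated config, as the five commands above show: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A sketch of that sequence, with the PATH prefix and config path taken from the log lines:

    package restart

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases re-runs the kubeadm init phases in the order logged above.
    func runInitPhases(binDir, cfg string) error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
            }
        }
        return nil
    }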
	I0805 23:02:52.335371 1587477 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:02:52.335441 1587477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:02:52.835817 1587477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:02:53.336260 1587477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:02:53.368467 1587477 api_server.go:72] duration metric: took 1.033098523s to wait for apiserver process to appear ...
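Before any HTTP probing, api_server.go:52 waits for a kube-apiserver process to exist at all, re-running `pgrep -xnf` roughly every half second until it exits 0. A compact sketch of that wait:

    package restart

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until pattern matches a running process
    // or the deadline passes, mirroring the ~500ms cadence in the log.
    func waitForProcess(pattern string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil // apiserver process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("process %q never appeared", pattern)
    }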
	I0805 23:02:53.368483 1587477 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:02:53.368501 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:57.340369 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 23:02:57.340387 1587477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 23:02:57.340399 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:57.357131 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 23:02:57.357147 1587477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 23:02:57.369304 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:57.404140 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 23:02:57.404170 1587477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 23:02:57.868698 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:57.876285 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 23:02:57.876305 1587477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 23:02:58.369445 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:58.386245 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 23:02:58.386263 1587477 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 23:02:58.868639 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:02:58.880112 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0805 23:02:58.902454 1587477 api_server.go:141] control plane version: v1.30.3
	I0805 23:02:58.902472 1587477 api_server.go:131] duration metric: took 5.53398395s to wait for apiserver health ...
	I0805 23:02:58.902480 1587477 cni.go:84] Creating CNI manager for ""
	I0805 23:02:58.902486 1587477 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 23:02:58.905761 1587477 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 23:02:58.908417 1587477 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 23:02:58.912640 1587477 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 23:02:58.912652 1587477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 23:02:58.951223 1587477 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 23:02:59.471909 1587477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:02:59.490365 1587477 system_pods.go:59] 8 kube-system pods found
	I0805 23:02:59.490385 1587477 system_pods.go:61] "coredns-7db6d8ff4d-v6mkh" [0cd4cbba-7612-4bed-ab12-acad18367268] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 23:02:59.490395 1587477 system_pods.go:61] "etcd-functional-220049" [2a4e50f2-6778-4e92-9bcc-65c8cecce3c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 23:02:59.490402 1587477 system_pods.go:61] "kindnet-v22mx" [6683675b-0f55-48bb-91fa-07c818972a97] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0805 23:02:59.490408 1587477 system_pods.go:61] "kube-apiserver-functional-220049" [70313ae8-c5f1-4a75-9dfa-01c8e9f923eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 23:02:59.490413 1587477 system_pods.go:61] "kube-controller-manager-functional-220049" [eab217ae-430f-4312-a7e9-2bdcc81f725b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 23:02:59.490416 1587477 system_pods.go:61] "kube-proxy-pbqwk" [d2253415-e3c3-4ca2-9076-693b13e76c5d] Running
	I0805 23:02:59.490423 1587477 system_pods.go:61] "kube-scheduler-functional-220049" [d85f4c60-d19e-45e6-a60a-b25f5958dfca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 23:02:59.490428 1587477 system_pods.go:61] "storage-provisioner" [cecc1171-d50f-4849-9e79-0df5a085ff0c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 23:02:59.490434 1587477 system_pods.go:74] duration metric: took 18.513523ms to wait for pod list to return data ...
	I0805 23:02:59.490440 1587477 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:02:59.499370 1587477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 23:02:59.499389 1587477 node_conditions.go:123] node cpu capacity is 2
	I0805 23:02:59.499398 1587477 node_conditions.go:105] duration metric: took 8.953814ms to run NodePressure ...
	I0805 23:02:59.499420 1587477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 23:02:59.775581 1587477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 23:02:59.780501 1587477 kubeadm.go:739] kubelet initialised
	I0805 23:02:59.780511 1587477 kubeadm.go:740] duration metric: took 4.916481ms waiting for restarted kubelet to initialise ...
	I0805 23:02:59.780518 1587477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:02:59.786725 1587477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace to be "Ready" ...
	I0805 23:02:59.792953 1587477 pod_ready.go:92] pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace has status "Ready":"True"
	I0805 23:02:59.792965 1587477 pod_ready.go:81] duration metric: took 6.22646ms for pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace to be "Ready" ...
	I0805 23:02:59.792975 1587477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:01.799191 1587477 pod_ready.go:102] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"False"
	I0805 23:03:04.298836 1587477 pod_ready.go:102] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"False"
	I0805 23:03:06.299754 1587477 pod_ready.go:102] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"False"
	I0805 23:03:08.798824 1587477 pod_ready.go:102] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"False"
	I0805 23:03:10.799459 1587477 pod_ready.go:102] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"False"
	I0805 23:03:12.299627 1587477 pod_ready.go:92] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:12.299640 1587477 pod_ready.go:81] duration metric: took 12.506658691s for pod "etcd-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.299653 1587477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.305124 1587477 pod_ready.go:92] pod "kube-apiserver-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:12.305136 1587477 pod_ready.go:81] duration metric: took 5.47628ms for pod "kube-apiserver-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.305147 1587477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.310723 1587477 pod_ready.go:92] pod "kube-controller-manager-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:12.310736 1587477 pod_ready.go:81] duration metric: took 5.582494ms for pod "kube-controller-manager-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.310746 1587477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pbqwk" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.316209 1587477 pod_ready.go:92] pod "kube-proxy-pbqwk" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:12.316221 1587477 pod_ready.go:81] duration metric: took 5.468575ms for pod "kube-proxy-pbqwk" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.316231 1587477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.321949 1587477 pod_ready.go:92] pod "kube-scheduler-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:12.321961 1587477 pod_ready.go:81] duration metric: took 5.723112ms for pod "kube-scheduler-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:12.321971 1587477 pod_ready.go:38] duration metric: took 12.54144513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:03:12.321987 1587477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 23:03:12.329426 1587477 ops.go:34] apiserver oom_adj: -16
	I0805 23:03:12.329437 1587477 kubeadm.go:597] duration metric: took 24.068443427s to restartPrimaryControlPlane
	I0805 23:03:12.329445 1587477 kubeadm.go:394] duration metric: took 24.20178444s to StartCluster
	I0805 23:03:12.329460 1587477 settings.go:142] acquiring lock: {Name:mk3a1710a3f4cbefc7bc92fbb01d7e9e884b2ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:03:12.329532 1587477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 23:03:12.330182 1587477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/kubeconfig: {Name:mk27f7706a4f201bd85010407a0f2ea984ce81b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:03:12.330423 1587477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:03:12.330675 1587477 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:03:12.330713 1587477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 23:03:12.330781 1587477 addons.go:69] Setting storage-provisioner=true in profile "functional-220049"
	I0805 23:03:12.330801 1587477 addons.go:234] Setting addon storage-provisioner=true in "functional-220049"
	W0805 23:03:12.330806 1587477 addons.go:243] addon storage-provisioner should already be in state true
	I0805 23:03:12.330817 1587477 addons.go:69] Setting default-storageclass=true in profile "functional-220049"
	I0805 23:03:12.330828 1587477 host.go:66] Checking if "functional-220049" exists ...
	I0805 23:03:12.330840 1587477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-220049"
	I0805 23:03:12.331144 1587477 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
	I0805 23:03:12.331211 1587477 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
	I0805 23:03:12.334790 1587477 out.go:177] * Verifying Kubernetes components...
	I0805 23:03:12.337440 1587477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:03:12.367862 1587477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 23:03:12.370538 1587477 addons.go:234] Setting addon default-storageclass=true in "functional-220049"
	W0805 23:03:12.370549 1587477 addons.go:243] addon default-storageclass should already be in state true
	I0805 23:03:12.370574 1587477 host.go:66] Checking if "functional-220049" exists ...
	I0805 23:03:12.370987 1587477 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
	I0805 23:03:12.375901 1587477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:03:12.375914 1587477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 23:03:12.375980 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:03:12.408022 1587477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 23:03:12.408035 1587477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 23:03:12.408104 1587477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
	I0805 23:03:12.415660 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:03:12.445204 1587477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
	I0805 23:03:12.547992 1587477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:03:12.564508 1587477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:03:12.571616 1587477 node_ready.go:35] waiting up to 6m0s for node "functional-220049" to be "Ready" ...
	I0805 23:03:12.575283 1587477 node_ready.go:49] node "functional-220049" has status "Ready":"True"
	I0805 23:03:12.575295 1587477 node_ready.go:38] duration metric: took 3.655904ms for node "functional-220049" to be "Ready" ...
	I0805 23:03:12.575303 1587477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:03:12.594625 1587477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 23:03:12.703927 1587477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.103642 1587477 pod_ready.go:92] pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:13.103654 1587477 pod_ready.go:81] duration metric: took 399.711156ms for pod "coredns-7db6d8ff4d-v6mkh" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.103663 1587477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.318148 1587477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0805 23:03:13.320978 1587477 addons.go:510] duration metric: took 990.254931ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0805 23:03:13.497727 1587477 pod_ready.go:92] pod "etcd-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:13.497739 1587477 pod_ready.go:81] duration metric: took 394.06934ms for pod "etcd-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.497753 1587477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.897576 1587477 pod_ready.go:92] pod "kube-apiserver-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:13.897588 1587477 pod_ready.go:81] duration metric: took 399.829104ms for pod "kube-apiserver-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:13.897598 1587477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:14.297905 1587477 pod_ready.go:92] pod "kube-controller-manager-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:14.297917 1587477 pod_ready.go:81] duration metric: took 400.311964ms for pod "kube-controller-manager-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:14.297926 1587477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbqwk" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:14.697626 1587477 pod_ready.go:92] pod "kube-proxy-pbqwk" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:14.697638 1587477 pod_ready.go:81] duration metric: took 399.705783ms for pod "kube-proxy-pbqwk" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:14.697649 1587477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:15.097661 1587477 pod_ready.go:92] pod "kube-scheduler-functional-220049" in "kube-system" namespace has status "Ready":"True"
	I0805 23:03:15.097673 1587477 pod_ready.go:81] duration metric: took 400.017895ms for pod "kube-scheduler-functional-220049" in "kube-system" namespace to be "Ready" ...
	I0805 23:03:15.097685 1587477 pod_ready.go:38] duration metric: took 2.522373037s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:03:15.097698 1587477 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:03:15.097768 1587477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:03:15.110602 1587477 api_server.go:72] duration metric: took 2.78015085s to wait for apiserver process to appear ...
	I0805 23:03:15.110620 1587477 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:03:15.110644 1587477 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0805 23:03:15.118574 1587477 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0805 23:03:15.119939 1587477 api_server.go:141] control plane version: v1.30.3
	I0805 23:03:15.119955 1587477 api_server.go:131] duration metric: took 9.329721ms to wait for apiserver health ...
	I0805 23:03:15.119963 1587477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:03:15.300381 1587477 system_pods.go:59] 8 kube-system pods found
	I0805 23:03:15.300397 1587477 system_pods.go:61] "coredns-7db6d8ff4d-v6mkh" [0cd4cbba-7612-4bed-ab12-acad18367268] Running
	I0805 23:03:15.300401 1587477 system_pods.go:61] "etcd-functional-220049" [2a4e50f2-6778-4e92-9bcc-65c8cecce3c1] Running
	I0805 23:03:15.300405 1587477 system_pods.go:61] "kindnet-v22mx" [6683675b-0f55-48bb-91fa-07c818972a97] Running
	I0805 23:03:15.300409 1587477 system_pods.go:61] "kube-apiserver-functional-220049" [70313ae8-c5f1-4a75-9dfa-01c8e9f923eb] Running
	I0805 23:03:15.300413 1587477 system_pods.go:61] "kube-controller-manager-functional-220049" [eab217ae-430f-4312-a7e9-2bdcc81f725b] Running
	I0805 23:03:15.300416 1587477 system_pods.go:61] "kube-proxy-pbqwk" [d2253415-e3c3-4ca2-9076-693b13e76c5d] Running
	I0805 23:03:15.300418 1587477 system_pods.go:61] "kube-scheduler-functional-220049" [d85f4c60-d19e-45e6-a60a-b25f5958dfca] Running
	I0805 23:03:15.300422 1587477 system_pods.go:61] "storage-provisioner" [cecc1171-d50f-4849-9e79-0df5a085ff0c] Running
	I0805 23:03:15.300427 1587477 system_pods.go:74] duration metric: took 180.459437ms to wait for pod list to return data ...
	I0805 23:03:15.300435 1587477 default_sa.go:34] waiting for default service account to be created ...
	I0805 23:03:15.497603 1587477 default_sa.go:45] found service account: "default"
	I0805 23:03:15.497618 1587477 default_sa.go:55] duration metric: took 197.177463ms for default service account to be created ...
	I0805 23:03:15.497627 1587477 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 23:03:15.700675 1587477 system_pods.go:86] 8 kube-system pods found
	I0805 23:03:15.700691 1587477 system_pods.go:89] "coredns-7db6d8ff4d-v6mkh" [0cd4cbba-7612-4bed-ab12-acad18367268] Running
	I0805 23:03:15.700697 1587477 system_pods.go:89] "etcd-functional-220049" [2a4e50f2-6778-4e92-9bcc-65c8cecce3c1] Running
	I0805 23:03:15.700700 1587477 system_pods.go:89] "kindnet-v22mx" [6683675b-0f55-48bb-91fa-07c818972a97] Running
	I0805 23:03:15.700703 1587477 system_pods.go:89] "kube-apiserver-functional-220049" [70313ae8-c5f1-4a75-9dfa-01c8e9f923eb] Running
	I0805 23:03:15.700707 1587477 system_pods.go:89] "kube-controller-manager-functional-220049" [eab217ae-430f-4312-a7e9-2bdcc81f725b] Running
	I0805 23:03:15.700710 1587477 system_pods.go:89] "kube-proxy-pbqwk" [d2253415-e3c3-4ca2-9076-693b13e76c5d] Running
	I0805 23:03:15.700714 1587477 system_pods.go:89] "kube-scheduler-functional-220049" [d85f4c60-d19e-45e6-a60a-b25f5958dfca] Running
	I0805 23:03:15.700717 1587477 system_pods.go:89] "storage-provisioner" [cecc1171-d50f-4849-9e79-0df5a085ff0c] Running
	I0805 23:03:15.700723 1587477 system_pods.go:126] duration metric: took 203.091245ms to wait for k8s-apps to be running ...
	I0805 23:03:15.700732 1587477 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 23:03:15.700793 1587477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:03:15.712715 1587477 system_svc.go:56] duration metric: took 11.973465ms WaitForService to wait for kubelet
	I0805 23:03:15.712734 1587477 kubeadm.go:582] duration metric: took 3.382290185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:03:15.712765 1587477 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:03:15.897111 1587477 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0805 23:03:15.897126 1587477 node_conditions.go:123] node cpu capacity is 2
	I0805 23:03:15.897135 1587477 node_conditions.go:105] duration metric: took 184.366351ms to run NodePressure ...
	I0805 23:03:15.897147 1587477 start.go:241] waiting for startup goroutines ...
	I0805 23:03:15.897154 1587477 start.go:246] waiting for cluster config update ...
	I0805 23:03:15.897164 1587477 start.go:255] writing updated cluster config ...
	I0805 23:03:15.897487 1587477 ssh_runner.go:195] Run: rm -f paused
	I0805 23:03:15.955470 1587477 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 23:03:15.960414 1587477 out.go:177] * Done! kubectl is now configured to use "functional-220049" cluster and "default" namespace by default
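	
	For reference, the healthz progression above (403 from "system:anonymous", then 500 while post-start hooks settle, then 200) can be probed by hand. A minimal sketch, assuming the kubeconfig context and endpoint that this run configured:
	
	  # per-check detail; prints the same [+]/[-] lines as the 500 bodies above
	  kubectl --context functional-220049 get --raw='/healthz?verbose'
	
	  # unauthenticated probe; expect 403 until the rbac/bootstrap-roles hook completes
	  curl -k https://192.168.49.2:8441/healthz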
	
	
	==> CRI-O <==
	Aug 05 23:04:07 functional-220049 crio[4201]: time="2024-08-05 23:04:07.410142263Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Aug 05 23:04:07 functional-220049 crio[4201]: time="2024-08-05 23:04:07.598419061Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=913b9541-c555-4cfd-804f-4aec1a8121a6 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:07 functional-220049 crio[4201]: time="2024-08-05 23:04:07.598640015Z" level=info msg="Image docker.io/nginx:alpine not found" id=913b9541-c555-4cfd-804f-4aec1a8121a6 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:19 functional-220049 crio[4201]: time="2024-08-05 23:04:19.378530149Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1051bb43-25e4-4baf-9396-45fa2c2f8ab9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:19 functional-220049 crio[4201]: time="2024-08-05 23:04:19.378757913Z" level=info msg="Image docker.io/nginx:alpine not found" id=1051bb43-25e4-4baf-9396-45fa2c2f8ab9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:37 functional-220049 crio[4201]: time="2024-08-05 23:04:37.706794107Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=4e38ecd3-82a8-406a-9080-3e591f953070 name=/runtime.v1.ImageService/PullImage
	Aug 05 23:04:37 functional-220049 crio[4201]: time="2024-08-05 23:04:37.708393093Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Aug 05 23:04:38 functional-220049 crio[4201]: time="2024-08-05 23:04:38.655507328Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9c8a9155-8589-4742-93c2-bb2c0fa22a58 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:38 functional-220049 crio[4201]: time="2024-08-05 23:04:38.655729357Z" level=info msg="Image docker.io/nginx:latest not found" id=9c8a9155-8589-4742-93c2-bb2c0fa22a58 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:51 functional-220049 crio[4201]: time="2024-08-05 23:04:51.378213836Z" level=info msg="Checking image status: docker.io/nginx:latest" id=955d0e0b-8b4d-43ca-9c7c-a863852c715d name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:04:51 functional-220049 crio[4201]: time="2024-08-05 23:04:51.378442192Z" level=info msg="Image docker.io/nginx:latest not found" id=955d0e0b-8b4d-43ca-9c7c-a863852c715d name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:05:08 functional-220049 crio[4201]: time="2024-08-05 23:05:08.017837606Z" level=info msg="Pulling image: docker.io/nginx:latest" id=db24b658-93e3-418c-aac5-928c4ba2fdf4 name=/runtime.v1.ImageService/PullImage
	Aug 05 23:05:08 functional-220049 crio[4201]: time="2024-08-05 23:05:08.022187126Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Aug 05 23:05:22 functional-220049 crio[4201]: time="2024-08-05 23:05:22.378631743Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0bdbecb4-6d32-4d81-bcd2-d435ee8304cf name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:05:22 functional-220049 crio[4201]: time="2024-08-05 23:05:22.378865563Z" level=info msg="Image docker.io/nginx:alpine not found" id=0bdbecb4-6d32-4d81-bcd2-d435ee8304cf name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:05:33 functional-220049 crio[4201]: time="2024-08-05 23:05:33.377989641Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5a15eeaf-a1f5-46f3-958c-c95bab64edb9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:05:33 functional-220049 crio[4201]: time="2024-08-05 23:05:33.378245926Z" level=info msg="Image docker.io/nginx:alpine not found" id=5a15eeaf-a1f5-46f3-958c-c95bab64edb9 name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:06:08 functional-220049 crio[4201]: time="2024-08-05 23:06:08.626108315Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=b86687d9-e24e-483e-ae2c-71532fb170a4 name=/runtime.v1.ImageService/PullImage
	Aug 05 23:06:08 functional-220049 crio[4201]: time="2024-08-05 23:06:08.629153189Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Aug 05 23:06:21 functional-220049 crio[4201]: time="2024-08-05 23:06:21.378208099Z" level=info msg="Checking image status: docker.io/nginx:latest" id=204694be-fd66-475c-a8a6-2b8aaf01195a name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:06:21 functional-220049 crio[4201]: time="2024-08-05 23:06:21.378460650Z" level=info msg="Image docker.io/nginx:latest not found" id=204694be-fd66-475c-a8a6-2b8aaf01195a name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:06:34 functional-220049 crio[4201]: time="2024-08-05 23:06:34.378649970Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8dcadd84-5f98-40c4-b275-f80fdf9b9a9f name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:06:34 functional-220049 crio[4201]: time="2024-08-05 23:06:34.378874764Z" level=info msg="Image docker.io/nginx:latest not found" id=8dcadd84-5f98-40c4-b275-f80fdf9b9a9f name=/runtime.v1.ImageService/ImageStatus
	Aug 05 23:06:38 functional-220049 crio[4201]: time="2024-08-05 23:06:38.960340555Z" level=info msg="Pulling image: docker.io/nginx:latest" id=e1ca9fad-03c1-4816-98be-71355c43e02a name=/runtime.v1.ImageService/PullImage
	Aug 05 23:06:38 functional-220049 crio[4201]: time="2024-08-05 23:06:38.962027713Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
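	
	The entries above show docker.io/nginx:alpine and docker.io/nginx:latest still unresolved after repeated pull attempts, consistent with slow or throttled Docker Hub access. A hedged workaround using minikube's standard image commands (profile name taken from this run):
	
	  # side-load the image from the host instead of pulling inside the node
	  docker pull nginx:alpine && minikube -p functional-220049 image load nginx:alpine
	
	  # or check what the runtime has actually pulled so far
	  minikube -p functional-220049 ssh -- sudo crictl images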
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                    CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0eb6693c57d3       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5   3 minutes ago       Running             echoserver-arm            0                   eb817784cbc33       hello-node-65f5d5cc78-qglk6
	34cae80de3af1       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                         3 minutes ago       Running             coredns                   2                   714228be44cad       coredns-7db6d8ff4d-v6mkh
	9ad5b59e1951a       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                         3 minutes ago       Running             kube-proxy                2                   0ce0f74f0c68e       kube-proxy-pbqwk
	d4f6047c67393       d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806                                         3 minutes ago       Running             kindnet-cni               2                   6cce44ad76033       kindnet-v22mx
	76f64f5168bba       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         3 minutes ago       Running             storage-provisioner       3                   9dad465606305       storage-provisioner
	9ac9329a9e8f8       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                         3 minutes ago       Running             kube-scheduler            2                   bb53fda2a614b       kube-scheduler-functional-220049
	2a2a3a82a7505       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                         3 minutes ago       Running             kube-apiserver            0                   1d60380600c06       kube-apiserver-functional-220049
	69d6d697cb83c       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                         3 minutes ago       Running             kube-controller-manager   2                   d7e2904c80689       kube-controller-manager-functional-220049
	513211fca6586       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                         3 minutes ago       Running             etcd                      2                   8673f2c6b5721       etcd-functional-220049
	0e1478a522342       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         4 minutes ago       Exited              storage-provisioner       2                   9dad465606305       storage-provisioner
	19619d3f019d7       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                         4 minutes ago       Exited              coredns                   1                   714228be44cad       coredns-7db6d8ff4d-v6mkh
	707520783bdd5       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                         4 minutes ago       Exited              kube-scheduler            1                   bb53fda2a614b       kube-scheduler-functional-220049
	c9ac399b3539f       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                         4 minutes ago       Exited              kube-proxy                1                   0ce0f74f0c68e       kube-proxy-pbqwk
	7efe6b5e03b4c       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                         4 minutes ago       Exited              kube-controller-manager   1                   d7e2904c80689       kube-controller-manager-functional-220049
	8de3e23dfb8a6       d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806                                         4 minutes ago       Exited              kindnet-cni               1                   6cce44ad76033       kindnet-v22mx
	aada177bb3cf6       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                         4 minutes ago       Exited              etcd                      1                   8673f2c6b5721       etcd-functional-220049
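	
	The table above is CRI-O's own view of the pods; the equivalent listing, including the exited attempt-1 containers, can be pulled straight from the node (profile name assumed from this run):
	
	  minikube -p functional-220049 ssh -- sudo crictl ps -a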
	
	
	==> coredns [19619d3f019d7f18b855168c553a8d2ca3942a463159de0b1d5366b3f0496df5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55487 - 55757 "HINFO IN 1544756696595158188.3048452427113768177. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021963118s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [34cae80de3af1800fbed0d3a504fe02a02e92a1d729e173a779197abe8f97b17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40257 - 12683 "HINFO IN 1083015980889524796.5392573434018764787. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019627228s
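	
	Both CoreDNS excerpts end healthy (the startup HINFO self-query resolved), so in-cluster DNS itself looks fine. A quick re-check against the live deployment, assuming the same kubeconfig context:
	
	  kubectl --context functional-220049 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context functional-220049 -n kube-system logs -l k8s-app=kube-dns --tail=5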
	
	
	==> describe nodes <==
	Name:               functional-220049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-220049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=functional-220049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_01_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:01:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-220049
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:03:59 +0000   Mon, 05 Aug 2024 23:01:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:03:59 +0000   Mon, 05 Aug 2024 23:01:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:03:59 +0000   Mon, 05 Aug 2024 23:01:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:03:59 +0000   Mon, 05 Aug 2024 23:02:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-220049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 1242d3327a0a42ceb9cdd3d529443539
	  System UUID:                dd446cc5-6411-4b5e-aca4-ce8506b7c611
	  Boot ID:                    ab3fa9fd-00f6-443b-af0d-60e87e17630c
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-qglk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7db6d8ff4d-v6mkh                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m
	  kube-system                 etcd-functional-220049                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m14s
	  kube-system                 kindnet-v22mx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m
	  kube-system                 kube-apiserver-functional-220049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-functional-220049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-pbqwk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-functional-220049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node functional-220049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node functional-220049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x8 over 5m21s)  kubelet          Node functional-220049 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m14s                  kubelet          Node functional-220049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s                  kubelet          Node functional-220049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s                  kubelet          Node functional-220049 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m1s                   node-controller  Node functional-220049 event: Registered Node functional-220049 in Controller
	  Normal  NodeReady                4m45s                  kubelet          Node functional-220049 status is now: NodeReady
	  Normal  RegisteredNode           4m16s                  node-controller  Node functional-220049 event: Registered Node functional-220049 in Controller
	  Normal  Starting                 3m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node functional-220049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node functional-220049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x8 over 3m57s)  kubelet          Node functional-220049 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m40s                  node-controller  Node functional-220049 event: Registered Node functional-220049 in Controller
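	
	The node report above shows Ready with no memory, disk, or PID pressure, so nginx-svc and sp-pod are stuck on image pulls rather than on scheduling or node health. The same view can be regenerated at any time with:
	
	  kubectl --context functional-220049 describe node functional-220049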
	
	
	==> dmesg <==
	[  +0.000670] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000862] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.001025] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +0.003877] FS-Cache: Duplicate cookie detected
	[  +0.000695] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000909] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c495d5fa
	[  +0.000976] FS-Cache: O-key=[8] 'e8633b0000000000'
	[  +0.000655] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000881] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000c84903e3
	[  +0.000991] FS-Cache: N-key=[8] 'e8633b0000000000'
	[  +2.077764] FS-Cache: Duplicate cookie detected
	[  +0.000839] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000c4c8673a
	[  +0.001004] FS-Cache: O-key=[8] 'e5633b0000000000'
	[  +0.000662] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000868] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=00000000b02f196c
	[  +0.001016] FS-Cache: N-key=[8] 'e5633b0000000000'
	[  +0.396957] FS-Cache: Duplicate cookie detected
	[  +0.000666] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000938] FS-Cache: O-cookie d=0000000098a0bcea{9p.inode} n=00000000d829204a
	[  +0.001050] FS-Cache: O-key=[8] 'ed633b0000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000884] FS-Cache: N-cookie d=0000000098a0bcea{9p.inode} n=000000008997b551
	[  +0.000977] FS-Cache: N-key=[8] 'ed633b0000000000'
	[Aug 5 21:59] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [513211fca65868fc1560c4024cf901cc0b370d2e240cbe575982fff01916c03c] <==
	{"level":"info","ts":"2024-08-05T23:02:53.065259Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:02:53.065406Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-05T23:02:53.065587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:02:53.06567Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:02:53.065711Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:02:53.066011Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T23:02:53.067344Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T23:02:53.067268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-05T23:02:53.067494Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-05T23:02:53.067624Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:02:53.067682Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:02:54.944597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:54.944719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:54.944776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:54.944818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-08-05T23:02:54.944853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-08-05T23:02:54.944889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-08-05T23:02:54.944927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-08-05T23:02:54.94877Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-220049 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:02:54.94894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:02:54.950765Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-05T23:02:54.952599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:02:54.952726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:02:54.952794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:02:54.964574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [aada177bb3cf6230f511fb1c2f1ae88e81b479ee74916ed5e48cd1414d975c86] <==
	{"level":"info","ts":"2024-08-05T23:02:17.86188Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:02:19.291973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:02:19.292087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:02:19.29214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-05T23:02:19.292181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:19.292214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:19.292255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:19.292288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-05T23:02:19.294765Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-220049 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:02:19.294872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:02:19.295257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:02:19.296998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-05T23:02:19.29713Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:02:19.297167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:02:19.298639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:02:39.99131Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:02:39.991392Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-220049","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-08-05T23:02:39.99147Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:02:40.009173Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:02:40.065421Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:02:40.065604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:02:40.065716Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-08-05T23:02:40.069278Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T23:02:40.069559Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-05T23:02:40.069628Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-220049","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:06:49 up  7:49,  0 users,  load average: 0.27, 0.84, 0.93
	Linux functional-220049 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8de3e23dfb8a61c8ac9475d662790180a66215e0569f5baaed8ef641a20f514e] <==
	E0805 23:02:22.725130       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0805 23:02:23.130881       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:02:23.130913       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:02:23.422852       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:02:23.422887       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:02:25.423486       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:02:25.423521       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:02:25.771475       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 23:02:25.771505       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0805 23:02:25.931383       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:02:25.931418       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 23:02:27.969974       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:02:27.970023       1 main.go:299] handling current node
	W0805 23:02:29.148779       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 23:02:29.148820       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0805 23:02:29.612514       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:02:29.612578       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:02:31.653677       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:02:31.653825       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:02:36.223288       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:02:36.223330       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:02:37.426626       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 23:02:37.426672       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 23:02:37.969708       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:02:37.969756       1 main.go:299] handling current node
	
	
	==> kindnet [d4f6047c673937dc25029b0a9858ed1eb82f24d6b52338aa4524c28b98d6a4a1] <==
	E0805 23:05:36.922742       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0805 23:05:39.258533       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:05:39.258574       1 main.go:299] handling current node
	I0805 23:05:49.258393       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:05:49.258432       1 main.go:299] handling current node
	I0805 23:05:59.258454       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:05:59.258515       1 main.go:299] handling current node
	I0805 23:06:09.258129       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:06:09.258251       1 main.go:299] handling current node
	W0805 23:06:11.698744       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:06:11.698777       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0805 23:06:19.257846       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:06:19.257887       1 main.go:299] handling current node
	W0805 23:06:20.730197       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0805 23:06:20.730233       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0805 23:06:29.257972       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:06:29.258010       1 main.go:299] handling current node
	W0805 23:06:36.396719       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:06:36.396761       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0805 23:06:39.257866       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:06:39.257911       1 main.go:299] handling current node
	I0805 23:06:49.258404       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0805 23:06:49.258538       1 main.go:299] handling current node
	W0805 23:06:49.464756       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:06:49.464797       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
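
Note: both kindnet containers log the same pattern: reflector relists are denied because the system:serviceaccount:kube-system:kindnet account cannot list pods, namespaces, or networkpolicies at cluster scope, even though per-node handling keeps working. Whether the grant actually exists can be asked of the API server directly with a SubjectAccessReview; a hedged client-go sketch, with the kubeconfig path as a placeholder:

	// Hedged sketch: check whether the kindnet service account may list
	// NetworkPolicies cluster-wide. Assumes k8s.io/client-go; the
	// kubeconfig path below is a placeholder.
	package main

	import (
		"context"
		"fmt"
		"log"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:serviceaccount:kube-system:kindnet",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "networking.k8s.io",
					Resource: "networkpolicies",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().
			Create(context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}

The kubectl equivalent is: kubectl auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet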
	
	
	==> kube-apiserver [2a2a3a82a7505ac30cb12c5b74f177fbc0a443408101c18c644378b1e3e83554] <==
	I0805 23:02:57.429426       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:02:57.430084       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:02:57.430109       1 policy_source.go:224] refreshing policies
	I0805 23:02:57.430419       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:02:57.430449       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:02:57.430456       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:02:57.430560       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:02:57.430575       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:02:57.430581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:02:57.430586       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:02:57.434103       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0805 23:02:57.436624       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 23:02:57.473089       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:02:58.230837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:02:59.460205       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:02:59.639715       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:02:59.655031       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:02:59.727571       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:02:59.735394       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:03:16.323182       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:03:19.862335       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.212.77"}
	I0805 23:03:19.882317       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:03:29.132084       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:03:29.298263       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.171.163"}
	I0805 23:03:36.529389       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.41.229"}
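
Note: the tail of this block connects to the failures listed at the top of the report. nginx-svc (10.105.41.229) and hello-node both received ClusterIPs, so the Service objects were created; what never happened is the backing pods becoming Ready, because their docker.io image pulls were rate-limited (see the kubelet log at the end of this dump). A hedged sketch of the readiness check those tests effectively wait on, assuming client-go and the run=nginx pod label used by the test:

	// Hedged sketch: poll the pod behind nginx-svc until a container is
	// Ready, surfacing ImagePullBackOff/ErrImagePull while waiting.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitNginxReady(ctx context.Context, cs *kubernetes.Clientset) error {
		return wait.PollUntilContextTimeout(ctx, 5*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
					LabelSelector: "run=nginx",
				})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					for _, st := range p.Status.ContainerStatuses {
						// In this run the state stays Waiting with reason
						// ErrImagePull/ImagePullBackOff (see kubelet log).
						if w := st.State.Waiting; w != nil {
							fmt.Printf("%s: %s: %s\n", p.Name, w.Reason, w.Message)
						}
						if st.Ready {
							return true, nil
						}
					}
				}
				return false, nil
			})
	}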
	
	
	==> kube-controller-manager [69d6d697cb83c464aa1dd6ffa109f2d69f477f3a0f67797c401c1a867122fde0] <==
	I0805 23:03:10.018736       1 shared_informer.go:320] Caches are synced for namespace
	I0805 23:03:10.018874       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 23:03:10.018923       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 23:03:10.021347       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 23:03:10.021601       1 shared_informer.go:320] Caches are synced for TTL
	I0805 23:03:10.022137       1 shared_informer.go:320] Caches are synced for cronjob
	I0805 23:03:10.028244       1 shared_informer.go:320] Caches are synced for HPA
	I0805 23:03:10.029494       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 23:03:10.047050       1 shared_informer.go:320] Caches are synced for daemon sets
	I0805 23:03:10.050819       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 23:03:10.050970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.484µs"
	I0805 23:03:10.064650       1 shared_informer.go:320] Caches are synced for disruption
	I0805 23:03:10.136547       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 23:03:10.161696       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 23:03:10.227700       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:03:10.241898       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:03:10.690881       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:03:10.713161       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:03:10.713192       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:03:29.196271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="56.843994ms"
	I0805 23:03:29.213060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="16.661272ms"
	I0805 23:03:29.213441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="56.057µs"
	I0805 23:03:29.219438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="52.726µs"
	I0805 23:03:33.573413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="17.876097ms"
	I0805 23:03:33.573888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="122.166µs"
	
	
	==> kube-controller-manager [7efe6b5e03b4cd9c4c028cb1edb934394ea8730179533c264df5969f5fe05098] <==
	I0805 23:02:33.527337       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 23:02:33.532586       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 23:02:33.533481       1 shared_informer.go:320] Caches are synced for namespace
	I0805 23:02:33.537438       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0805 23:02:33.537580       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0805 23:02:33.540181       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 23:02:33.540688       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0805 23:02:33.545352       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 23:02:33.545557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.83µs"
	I0805 23:02:33.547813       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 23:02:33.547960       1 shared_informer.go:320] Caches are synced for TTL
	I0805 23:02:33.557291       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0805 23:02:33.562631       1 shared_informer.go:320] Caches are synced for disruption
	I0805 23:02:33.573784       1 shared_informer.go:320] Caches are synced for taint
	I0805 23:02:33.573902       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0805 23:02:33.573986       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-220049"
	I0805 23:02:33.574038       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0805 23:02:33.575944       1 shared_informer.go:320] Caches are synced for daemon sets
	I0805 23:02:33.606645       1 shared_informer.go:320] Caches are synced for HPA
	I0805 23:02:33.626249       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:02:33.670261       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0805 23:02:33.704398       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:02:34.139997       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:02:34.140028       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:02:34.174197       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [9ad5b59e1951aed0890e5457be081a59996138b5993be2da2cd109e9eb74c5d4] <==
	I0805 23:02:58.961293       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:02:58.976372       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0805 23:02:59.078357       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0805 23:02:59.078483       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:02:59.080886       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0805 23:02:59.080916       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0805 23:02:59.080946       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:02:59.081142       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:02:59.081161       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:02:59.082100       1 config.go:192] "Starting service config controller"
	I0805 23:02:59.082128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:02:59.082153       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:02:59.082157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:02:59.082714       1 config.go:319] "Starting node config controller"
	I0805 23:02:59.082732       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:02:59.183094       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:02:59.183132       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:02:59.183193       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c9ac399b3539faba22b589e93745009da9589358b92a0faa74784d3f2fed8e35] <==
	I0805 23:02:20.769699       1 server_linux.go:69] "Using iptables proxy"
	E0805 23:02:21.933230       1 server.go:1051] "Failed to retrieve node info" err="nodes \"functional-220049\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]"
	I0805 23:02:23.014211       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0805 23:02:23.036193       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0805 23:02:23.036260       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:02:23.038513       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0805 23:02:23.038543       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0805 23:02:23.038573       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:02:23.038806       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:02:23.038887       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:02:23.040193       1 config.go:192] "Starting service config controller"
	I0805 23:02:23.040218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:02:23.040243       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:02:23.040254       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:02:23.040936       1 config.go:319] "Starting node config controller"
	I0805 23:02:23.042368       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:02:23.141604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:02:23.141618       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:02:23.142770       1 shared_informer.go:320] Caches are synced for node config
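
Note: the single error in this older kube-proxy instance is transient: at 23:02:21 the default ClusterRoles had not been recreated yet after the control-plane restart, so the node lookup was Forbidden, and the next attempt at 23:02:23 succeeded. Client code that has to tolerate that window typically retries on Forbidden; a minimal sketch using client-go's retry helper (package and function names here are illustrative):

	// Hedged sketch: fetch a Node, retrying while RBAC objects are still
	// syncing. retry.DefaultBackoff gives up quickly; kube-proxy's own
	// loop differs, this only illustrates tolerating IsForbidden.
	package nodeinfo

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func getNodeWithRetry(ctx context.Context, cs *kubernetes.Clientset, name string) (*corev1.Node, error) {
		var node *corev1.Node
		err := retry.OnError(retry.DefaultBackoff, apierrors.IsForbidden, func() error {
			var getErr error
			node, getErr = cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			return getErr
		})
		return node, err
	}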
	
	
	==> kube-scheduler [707520783bdd539ed4a204d410f2b5f1235140693ed305b2ab5600fcbf29417b] <==
	E0805 23:02:21.869200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0805 23:02:21.869389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0805 23:02:21.869543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0805 23:02:21.886874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0805 23:02:21.887285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0805 23:02:21.887219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0805 23:02:21.887944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0805 23:02:21.887470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0805 23:02:21.888108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0805 23:02:21.887524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0805 23:02:21.888215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0805 23:02:21.887564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0805 23:02:21.888313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0805 23:02:21.887625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0805 23:02:21.888418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0805 23:02:21.887669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0805 23:02:21.887709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0805 23:02:21.888622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0805 23:02:21.887841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0805 23:02:21.888706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0805 23:02:21.888064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0805 23:02:21.888788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0805 23:02:21.888546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	I0805 23:02:21.944013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:02:39.993506       1 run.go:74] "command failed" err="finished without leader elect"
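
Note: this first scheduler instance shows the same startup RBAC window as kube-proxy above and then exits at 23:02:39. The "finished without leader elect" string is the scheduler's exit path when it runs with leader election disabled, as it does here: it simply ran until its context was cancelled by the restart, so this is a normal shutdown rather than a lost election. When leader election is enabled, the same binary holds a coordination.k8s.io Lease via client-go; a hedged sketch of that mechanism, with illustrative lease name and identity:

	// Hedged sketch: lease-based leader election as used by control-plane
	// components. Lease name "demo-scheduler" and id are illustrative.
	package leader

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func run(ctx context.Context, cs *kubernetes.Clientset, id string) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; the scheduling loop would start here")
					<-ctx.Done()
				},
				OnStoppedLeading: func() {
					log.Println("leadership released or lost")
				},
			},
		})
	}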
	
	
	==> kube-scheduler [9ac9329a9e8f8623893d9a515c6527fb481f61a91018cf07c45abbb2e90987a7] <==
	I0805 23:02:55.077936       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:02:57.315930       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:02:57.316044       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:02:57.316082       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:02:57.316126       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:02:57.374656       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:02:57.374765       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:02:57.380717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:02:57.380981       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:02:57.381028       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:02:57.381085       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:02:57.481265       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:03:47 functional-220049 kubelet[4539]: I0805 23:03:47.738060    4539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9df128b8-2c4c-4713-98bb-2143e22bf026\" (UniqueName: \"kubernetes.io/host-path/cbb1dba6-11a9-4893-b4b1-a1d5af81920b-pvc-9df128b8-2c4c-4713-98bb-2143e22bf026\") pod \"sp-pod\" (UID: \"cbb1dba6-11a9-4893-b4b1-a1d5af81920b\") " pod="default/sp-pod"
	Aug 05 23:04:07 functional-220049 kubelet[4539]: E0805 23:04:07.406538    4539 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:04:07 functional-220049 kubelet[4539]: E0805 23:04:07.406610    4539 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:04:07 functional-220049 kubelet[4539]: E0805 23:04:07.406843    4539 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(2cec2c2f-2ef9-4047-aa77-868d3cb65a41): ErrImagePull: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 23:04:07 functional-220049 kubelet[4539]: E0805 23:04:07.406872    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="2cec2c2f-2ef9-4047-aa77-868d3cb65a41"
	Aug 05 23:04:07 functional-220049 kubelet[4539]: E0805 23:04:07.598841    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="2cec2c2f-2ef9-4047-aa77-868d3cb65a41"
	Aug 05 23:04:37 functional-220049 kubelet[4539]: E0805 23:04:37.706031    4539 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 23:04:37 functional-220049 kubelet[4539]: E0805 23:04:37.706096    4539 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 23:04:37 functional-220049 kubelet[4539]: E0805 23:04:37.706326    4539 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-497kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(cbb1dba6-11a9-4893-b4b1-a1d5af81920b): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 23:04:37 functional-220049 kubelet[4539]: E0805 23:04:37.706357    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="cbb1dba6-11a9-4893-b4b1-a1d5af81920b"
	Aug 05 23:04:38 functional-220049 kubelet[4539]: E0805 23:04:38.655929    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="cbb1dba6-11a9-4893-b4b1-a1d5af81920b"
	Aug 05 23:05:08 functional-220049 kubelet[4539]: E0805 23:05:08.016448    4539 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:05:08 functional-220049 kubelet[4539]: E0805 23:05:08.016514    4539 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:05:08 functional-220049 kubelet[4539]: E0805 23:05:08.016787    4539 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(2cec2c2f-2ef9-4047-aa77-868d3cb65a41): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 23:05:08 functional-220049 kubelet[4539]: E0805 23:05:08.016818    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="2cec2c2f-2ef9-4047-aa77-868d3cb65a41"
	Aug 05 23:05:22 functional-220049 kubelet[4539]: E0805 23:05:22.379240    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="2cec2c2f-2ef9-4047-aa77-868d3cb65a41"
	Aug 05 23:06:08 functional-220049 kubelet[4539]: E0805 23:06:08.625257    4539 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 23:06:08 functional-220049 kubelet[4539]: E0805 23:06:08.625327    4539 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 05 23:06:08 functional-220049 kubelet[4539]: E0805 23:06:08.625571    4539 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-497kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(cbb1dba6-11a9-4893-b4b1-a1d5af81920b): ErrImagePull: loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 23:06:08 functional-220049 kubelet[4539]: E0805 23:06:08.625603    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="cbb1dba6-11a9-4893-b4b1-a1d5af81920b"
	Aug 05 23:06:21 functional-220049 kubelet[4539]: E0805 23:06:21.378690    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="cbb1dba6-11a9-4893-b4b1-a1d5af81920b"
	Aug 05 23:06:38 functional-220049 kubelet[4539]: E0805 23:06:38.959509    4539 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:06:38 functional-220049 kubelet[4539]: E0805 23:06:38.959579    4539 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Aug 05 23:06:38 functional-220049 kubelet[4539]: E0805 23:06:38.959809    4539 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(2cec2c2f-2ef9-4047-aa77-868d3cb65a41): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 05 23:06:38 functional-220049 kubelet[4539]: E0805 23:06:38.959840    4539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="2cec2c2f-2ef9-4047-aa77-868d3cb65a41"
	
	
	==> storage-provisioner [0e1478a5223422ee3c0be28636644adabf88d2a6e1f4359e30e8e8c838d47628] <==
	I0805 23:02:37.059007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:02:37.083505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:02:37.083575       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [76f64f5168bbaf2eeb6a23addbb867abec1019452c4b39be8bfb63faf3e3e524] <==
	I0805 23:02:58.854292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:02:58.901265       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:02:58.920647       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:03:16.326647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:03:16.326830       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-220049_f99723ab-6d39-4292-8045-14a8dde33e2c!
	I0805 23:03:16.327750       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fef5cd35-e666-46fd-a301-96d2f1e06752", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-220049_f99723ab-6d39-4292-8045-14a8dde33e2c became leader
	I0805 23:03:16.427633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-220049_f99723ab-6d39-4292-8045-14a8dde33e2c!
	I0805 23:03:47.368735       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0805 23:03:47.368809       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2a1a6218-933a-4928-b910-0f20aa1e27a5 376 0 2024-08-05 23:01:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-05 23:01:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9df128b8-2c4c-4713-98bb-2143e22bf026 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9df128b8-2c4c-4713-98bb-2143e22bf026 725 0 2024-08-05 23:03:47 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-05 23:03:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-05 23:03:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0805 23:03:47.369303       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9df128b8-2c4c-4713-98bb-2143e22bf026" provisioned
	I0805 23:03:47.369325       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0805 23:03:47.369330       1 volume_store.go:212] Trying to save persistentvolume "pvc-9df128b8-2c4c-4713-98bb-2143e22bf026"
	I0805 23:03:47.369503       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9df128b8-2c4c-4713-98bb-2143e22bf026", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0805 23:03:47.391385       1 volume_store.go:219] persistentvolume "pvc-9df128b8-2c4c-4713-98bb-2143e22bf026" saved
	I0805 23:03:47.391834       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9df128b8-2c4c-4713-98bb-2143e22bf026", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9df128b8-2c4c-4713-98bb-2143e22bf026
	

                                                
                                                
-- /stdout --
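The storage-provisioner log above shows the hostpath flow completing end-to-end for "default/myclaim": the claim is picked up, a 500Mi ReadWriteOnce volume is provisioned under /tmp/hostpath-provisioner, and the PV is saved. A minimal Go sketch of an equivalent claim, driven through kubectl the way the harness drives its commands; the manifest is reconstructed from the logged spec, not copied from the suite's testdata:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// A claim equivalent to the one the provisioner logged above:
	// ReadWriteOnce, 500Mi, bound by the default "standard" hostpath class.
	const claim = `apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	`

	func main() {
		// The context name is the profile used throughout this report.
		cmd := exec.Command("kubectl", "--context", "functional-220049", "apply", "-f", "-")
		cmd.Stdin = strings.NewReader(claim)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}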
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-220049 -n functional-220049
helpers_test.go:261: (dbg) Run:  kubectl --context functional-220049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-220049 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-220049 describe pod nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-220049/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 23:03:36 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m4678 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-m4678:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m14s                default-scheduler  Successfully assigned default/nginx-svc to functional-220049
	  Warning  Failed     2m43s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    88s (x2 over 2m43s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     88s (x2 over 2m43s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    77s (x3 over 3m14s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x3 over 2m43s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s (x2 over 102s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-220049/192.168.49.2
	Start Time:       Mon, 05 Aug 2024 23:03:47 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-497kt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-497kt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-220049
	  Warning  Failed     2m13s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     42s (x2 over 2m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     42s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    29s (x2 over 2m12s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     29s (x2 over 2m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    16s (x3 over 3m3s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.70s)
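Everything cluster-side succeeded here; the test failed solely on Docker Hub's anonymous pull limit (toomanyrequests), so the pods never left ImagePullBackOff. A hedged sketch of one mitigation, assuming the images are already available on the host: pre-load them into the cluster so the kubelet never contacts docker.io. This helper is illustrative, not part of the suite:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// preloadImage copies a host-side image into the cluster's runtime so
	// the kubelet never needs to pull from docker.io. "minikube image load"
	// is the real subcommand; the profile name is the one in this report.
	func preloadImage(profile, image string) error {
		out, err := exec.Command("minikube", "-p", profile, "image", "load", image).CombinedOutput()
		if err != nil {
			return fmt.Errorf("image load %s: %v\n%s", image, err, out)
		}
		return nil
	}

	func main() {
		for _, img := range []string{"docker.io/nginx:alpine", "docker.io/nginx:latest"} {
			if err := preloadImage("functional-220049", img); err != nil {
				fmt.Println(err)
			}
		}
	}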

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-220049 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2cec2c2f-2ef9-4047-aa77-868d3cb65a41] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-220049 -n functional-220049
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-08-05 23:07:36.881639205 +0000 UTC m=+1136.993408037
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-220049 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-220049 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-220049/192.168.49.2
Start Time:       Mon, 05 Aug 2024 23:03:36 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m4678 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-m4678:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-220049
  Warning  Failed     3m29s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     58s (x3 over 3m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     58s (x2 over 2m28s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    22s (x5 over 3m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     22s (x5 over 3m29s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    9s (x4 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-220049 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-220049 logs nginx-svc -n default: exit status 1 (93.114329ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-220049 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.89s)
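The 4m0s wait that timed out above checks the same condition this sketch expresses with kubectl wait, used here as a stand-in for the harness's own client-go polling loop, only to make the failing condition concrete:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until the pod selected by run=nginx-svc reports Ready,
		// with the same 4-minute budget the test used.
		cmd := exec.Command("kubectl", "--context", "functional-220049",
			"wait", "--for=condition=ready", "pod",
			"-l", "run=nginx-svc", "-n", "default", "--timeout=4m0s")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("pod never became ready:", err)
		}
	}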

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (69.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0805 23:07:53.894921 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:08:21.580309 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-220049 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.105.41.229   10.105.41.229   80:31360/TCP   5m10s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (69.86s)
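The service did receive its tunnel-assigned EXTERNAL-IP (10.105.41.229, shown in the kubectl output above), so the only unmet check was the HTTP body, which could never be served while the image pull was rate-limited. A minimal sketch of that final probe, assuming "minikube tunnel" is still running on the host:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		// 10.105.41.229 is the EXTERNAL-IP reported above; it only routes
		// while "minikube tunnel" is active.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://10.105.41.229/")
		if err != nil {
			fmt.Println("tunnel not routing:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println("welcome page served:", strings.Contains(string(body), "Welcome to nginx!"))
	}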

                                                
                                    

Test pass (297/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.18
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.3/json-events 7.55
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.15
18 TestDownloadOnly/v1.30.3/DeleteAll 0.31
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 13.2
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.21
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 217.46
40 TestAddons/serial/GCPAuth/Namespaces 0.22
42 TestAddons/parallel/Registry 16.88
44 TestAddons/parallel/InspektorGadget 11.77
48 TestAddons/parallel/CSI 59.71
49 TestAddons/parallel/Headlamp 11.32
50 TestAddons/parallel/CloudSpanner 6.55
51 TestAddons/parallel/LocalPath 55.74
52 TestAddons/parallel/NvidiaDevicePlugin 6.54
53 TestAddons/parallel/Yakd 11.76
54 TestAddons/StoppedEnableDisable 12.18
55 TestCertOptions 39.61
56 TestCertExpiration 242.6
58 TestForceSystemdFlag 46.34
59 TestForceSystemdEnv 42.69
65 TestErrorSpam/setup 30.81
66 TestErrorSpam/start 0.85
67 TestErrorSpam/status 0.98
68 TestErrorSpam/pause 1.71
69 TestErrorSpam/unpause 1.79
70 TestErrorSpam/stop 1.43
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 61.23
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 22.75
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.39
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
87 TestFunctional/serial/CacheCmd/cache/delete 0.12
88 TestFunctional/serial/MinikubeKubectlCmd 0.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
90 TestFunctional/serial/ExtraConfig 37.82
91 TestFunctional/serial/ComponentHealth 0.11
92 TestFunctional/serial/LogsCmd 1.74
93 TestFunctional/serial/LogsFileCmd 1.8
94 TestFunctional/serial/InvalidService 4.89
96 TestFunctional/parallel/ConfigCmd 0.45
97 TestFunctional/parallel/DashboardCmd 9.45
98 TestFunctional/parallel/DryRun 0.42
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 1
104 TestFunctional/parallel/ServiceCmdConnect 6.62
105 TestFunctional/parallel/AddonsCmd 0.13
108 TestFunctional/parallel/SSHCmd 0.54
109 TestFunctional/parallel/CpCmd 1.96
111 TestFunctional/parallel/FileSync 0.35
112 TestFunctional/parallel/CertSync 2.13
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
120 TestFunctional/parallel/License 0.24
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 1.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.62
128 TestFunctional/parallel/ImageCommands/Setup 0.85
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.58
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
134 TestFunctional/parallel/ServiceCmd/DeployApp 11.27
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.31
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/ServiceCmd/List 0.33
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
148 TestFunctional/parallel/ServiceCmd/Format 0.38
149 TestFunctional/parallel/ServiceCmd/URL 0.39
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
151 TestFunctional/parallel/ProfileCmd/profile_list 0.39
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
153 TestFunctional/parallel/MountCmd/any-port 15.19
154 TestFunctional/parallel/MountCmd/specific-port 2.09
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
160 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
161 TestFunctional/delete_echo-server_images 0.04
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMultiControlPlane/serial/StartCluster 189.43
168 TestMultiControlPlane/serial/DeployApp 7.6
169 TestMultiControlPlane/serial/PingHostFromPods 1.65
170 TestMultiControlPlane/serial/AddWorkerNode 36.48
171 TestMultiControlPlane/serial/NodeLabels 0.11
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
173 TestMultiControlPlane/serial/CopyFile 19.46
174 TestMultiControlPlane/serial/StopSecondaryNode 12.76
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
176 TestMultiControlPlane/serial/RestartSecondaryNode 22.43
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 7.28
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 199.24
179 TestMultiControlPlane/serial/DeleteSecondaryNode 13.12
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
181 TestMultiControlPlane/serial/StopCluster 35.97
182 TestMultiControlPlane/serial/RestartCluster 97.33
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
184 TestMultiControlPlane/serial/AddSecondaryNode 74.84
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 60.21
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.73
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.82
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 6.16
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
214 TestKicCustomNetwork/create_custom_network 41.02
215 TestKicCustomNetwork/use_default_bridge_network 32.92
216 TestKicExistingNetwork 40.6
217 TestKicCustomSubnet 34.39
218 TestKicStaticIP 33.48
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 72.2
223 TestMountStart/serial/StartWithMountFirst 9.54
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.83
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.61
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.19
230 TestMountStart/serial/RestartStopped 8.08
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 92.87
235 TestMultiNode/serial/DeployApp2Nodes 5.14
236 TestMultiNode/serial/PingHostFrom2Pods 0.97
237 TestMultiNode/serial/AddNode 31.05
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 9.97
241 TestMultiNode/serial/StopNode 2.27
242 TestMultiNode/serial/StartAfterStop 9.93
243 TestMultiNode/serial/RestartKeepsNodes 82.29
244 TestMultiNode/serial/DeleteNode 5.47
245 TestMultiNode/serial/StopMultiNode 23.88
246 TestMultiNode/serial/RestartMultiNode 58.21
247 TestMultiNode/serial/ValidateNameConflict 35.8
252 TestPreload 126.6
254 TestScheduledStopUnix 105.93
257 TestInsufficientStorage 11.63
258 TestRunningBinaryUpgrade 90.94
260 TestKubernetesUpgrade 139.7
261 TestMissingContainerUpgrade 146.85
263 TestPause/serial/Start 68.37
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/StartWithK8s 42.49
267 TestNoKubernetes/serial/StartWithStopK8s 7.22
268 TestNoKubernetes/serial/Start 9.27
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
270 TestNoKubernetes/serial/ProfileList 1
271 TestNoKubernetes/serial/Stop 1.21
272 TestNoKubernetes/serial/StartNoArgs 7.29
273 TestPause/serial/SecondStartNoReconfiguration 39.06
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
275 TestPause/serial/Pause 1.08
276 TestPause/serial/VerifyStatus 0.38
277 TestPause/serial/Unpause 1.2
278 TestPause/serial/PauseAgain 1.69
279 TestPause/serial/DeletePaused 2.9
280 TestPause/serial/VerifyDeletedResources 0.45
281 TestStoppedBinaryUpgrade/Setup 0.74
282 TestStoppedBinaryUpgrade/Upgrade 105.7
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
298 TestNetworkPlugins/group/false 4.46
303 TestStartStop/group/old-k8s-version/serial/FirstStart 183.38
305 TestStartStop/group/no-preload/serial/FirstStart 74.7
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.66
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.69
308 TestStartStop/group/old-k8s-version/serial/Stop 12.57
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
310 TestStartStop/group/old-k8s-version/serial/SecondStart 134.63
311 TestStartStop/group/no-preload/serial/DeployApp 9.47
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.48
313 TestStartStop/group/no-preload/serial/Stop 12.42
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/no-preload/serial/SecondStart 268.18
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/old-k8s-version/serial/Pause 3.08
321 TestStartStop/group/embed-certs/serial/FirstStart 63.61
322 TestStartStop/group/embed-certs/serial/DeployApp 9.37
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
324 TestStartStop/group/embed-certs/serial/Stop 11.96
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 266.64
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.85
330 TestStartStop/group/no-preload/serial/Pause 3.06
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.93
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.1
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
341 TestStartStop/group/embed-certs/serial/Pause 3.14
343 TestStartStop/group/newest-cni/serial/FirstStart 41.5
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
346 TestStartStop/group/newest-cni/serial/Stop 1.26
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
348 TestStartStop/group/newest-cni/serial/SecondStart 18.71
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.86
352 TestStartStop/group/newest-cni/serial/Pause 3.14
353 TestNetworkPlugins/group/auto/Start 62.68
354 TestNetworkPlugins/group/auto/KubeletFlags 0.34
355 TestNetworkPlugins/group/auto/NetCatPod 12.28
356 TestNetworkPlugins/group/auto/DNS 0.2
357 TestNetworkPlugins/group/auto/Localhost 0.15
358 TestNetworkPlugins/group/auto/HairPin 0.16
359 TestNetworkPlugins/group/kindnet/Start 60.71
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.45
364 TestNetworkPlugins/group/calico/Start 81.86
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
367 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
368 TestNetworkPlugins/group/kindnet/DNS 0.21
369 TestNetworkPlugins/group/kindnet/Localhost 0.19
370 TestNetworkPlugins/group/kindnet/HairPin 0.19
371 TestNetworkPlugins/group/custom-flannel/Start 70.73
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.37
374 TestNetworkPlugins/group/calico/NetCatPod 13.34
375 TestNetworkPlugins/group/calico/DNS 0.28
376 TestNetworkPlugins/group/calico/Localhost 0.21
377 TestNetworkPlugins/group/calico/HairPin 0.2
378 TestNetworkPlugins/group/enable-default-cni/Start 89.02
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
381 TestNetworkPlugins/group/custom-flannel/DNS 0.32
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
384 TestNetworkPlugins/group/flannel/Start 63.21
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.31
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
392 TestNetworkPlugins/group/flannel/NetCatPod 12.3
393 TestNetworkPlugins/group/bridge/Start 90.7
394 TestNetworkPlugins/group/flannel/DNS 0.17
395 TestNetworkPlugins/group/flannel/Localhost 0.18
396 TestNetworkPlugins/group/flannel/HairPin 0.22
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
398 TestNetworkPlugins/group/bridge/NetCatPod 11.25
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (12.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-102066 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-102066 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.181417061s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-102066
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-102066: exit status 85 (69.693218ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-102066 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |          |
	|         | -p download-only-102066        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:48:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:48:39.974802 1565126 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:48:39.974961 1565126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:39.974972 1565126 out.go:304] Setting ErrFile to fd 2...
	I0805 22:48:39.974978 1565126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:39.975220 1565126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	W0805 22:48:39.975361 1565126 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19373-1559727/.minikube/config/config.json: open /home/jenkins/minikube-integration/19373-1559727/.minikube/config/config.json: no such file or directory
	I0805 22:48:39.975796 1565126 out.go:298] Setting JSON to true
	I0805 22:48:39.976724 1565126 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27060,"bootTime":1722871060,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 22:48:39.976797 1565126 start.go:139] virtualization:  
	I0805 22:48:39.979584 1565126 out.go:97] [download-only-102066] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0805 22:48:39.979795 1565126 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 22:48:39.979848 1565126 notify.go:220] Checking for updates...
	I0805 22:48:39.981360 1565126 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:48:39.983345 1565126 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:48:39.985403 1565126 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:48:39.987274 1565126 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 22:48:39.989077 1565126 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 22:48:39.992322 1565126 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:48:39.992625 1565126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:48:40.033984 1565126 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 22:48:40.034090 1565126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:48:40.095760 1565126 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 22:48:40.085457244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:48:40.095950 1565126 docker.go:307] overlay module found
	I0805 22:48:40.097887 1565126 out.go:97] Using the docker driver based on user configuration
	I0805 22:48:40.097925 1565126 start.go:297] selected driver: docker
	I0805 22:48:40.097939 1565126 start.go:901] validating driver "docker" against <nil>
	I0805 22:48:40.098077 1565126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:48:40.162018 1565126 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 22:48:40.151736726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:48:40.162184 1565126 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:48:40.162485 1565126 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 22:48:40.162646 1565126 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:48:40.164942 1565126 out.go:169] Using Docker driver with root privileges
	I0805 22:48:40.167004 1565126 cni.go:84] Creating CNI manager for ""
	I0805 22:48:40.167036 1565126 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:48:40.167049 1565126 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 22:48:40.167161 1565126 start.go:340] cluster config:
	{Name:download-only-102066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-102066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:48:40.169431 1565126 out.go:97] Starting "download-only-102066" primary control-plane node in "download-only-102066" cluster
	I0805 22:48:40.169473 1565126 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 22:48:40.171190 1565126 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 22:48:40.171222 1565126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:48:40.171386 1565126 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 22:48:40.187983 1565126 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:48:40.188202 1565126 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 22:48:40.188304 1565126 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:48:40.230047 1565126 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0805 22:48:40.230075 1565126 cache.go:56] Caching tarball of preloaded images
	I0805 22:48:40.230964 1565126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:48:40.233789 1565126 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 22:48:40.233817 1565126 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:48:40.320764 1565126 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0805 22:48:45.513610 1565126 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:48:45.513710 1565126 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:48:46.658307 1565126 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 22:48:46.658705 1565126 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/download-only-102066/config.json ...
	I0805 22:48:46.658740 1565126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/download-only-102066/config.json: {Name:mk79f473c804807fe3090286c99fa66a33e36523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:48:46.659495 1565126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:48:46.660274 1565126 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/linux/arm64/v1.20.0/kubectl
	I0805 22:48:46.716825 1565126 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	
	
	* The control-plane node download-only-102066 host does not exist
	  To start a cluster, run: "minikube start -p download-only-102066"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
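
Note on the preload download replayed in the log above: the URL carries an inline "?checksum=md5:..." parameter, and the "saving checksum" / "verifying checksum" lines re-hash the tarball on disk. A minimal sketch of that verification step in Go (illustrative only; the function name and the local path are hypothetical, not minikube's actual helper):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 re-hashes a downloaded tarball and compares the digest to the
	// value advertised in the download URL's checksum=md5:... parameter.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Hypothetical local path; the expected digest is the md5 from the log's URL.
		fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4",
			"59cd2ef07b53f039bfd1761b921f2a02"))
	}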

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-102066
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.3/json-events (7.55s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-424275 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-424275 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.546557166s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.55s)
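
Aside: the -o=json flag used by this test makes minikube emit one JSON event per stdout line instead of human-readable progress, which is what the json-events assertions consume. A rough consumer sketch (schema deliberately left loose; only a top-level "type" field is assumed):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe in: minikube start -o=json --download-only ... | thisprogram
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			fmt.Println("event type:", ev["type"])
		}
	}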

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.15s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-424275
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-424275: exit status 85 (149.720507ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-102066 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-102066        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| delete  | -p download-only-102066        | download-only-102066 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| start   | -o=json --download-only        | download-only-424275 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-424275        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:48:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:48:52.578955 1565328 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:48:52.579084 1565328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:52.579093 1565328 out.go:304] Setting ErrFile to fd 2...
	I0805 22:48:52.579098 1565328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:52.579340 1565328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 22:48:52.579743 1565328 out.go:298] Setting JSON to true
	I0805 22:48:52.580703 1565328 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27072,"bootTime":1722871060,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 22:48:52.580776 1565328 start.go:139] virtualization:  
	I0805 22:48:52.583117 1565328 out.go:97] [download-only-424275] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 22:48:52.583339 1565328 notify.go:220] Checking for updates...
	I0805 22:48:52.585016 1565328 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:48:52.586934 1565328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:48:52.588710 1565328 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:48:52.591244 1565328 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 22:48:52.592863 1565328 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 22:48:52.596299 1565328 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:48:52.596589 1565328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:48:52.622173 1565328 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 22:48:52.622280 1565328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:48:52.687068 1565328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:48:52.677992354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:48:52.687183 1565328 docker.go:307] overlay module found
	I0805 22:48:52.689334 1565328 out.go:97] Using the docker driver based on user configuration
	I0805 22:48:52.689364 1565328 start.go:297] selected driver: docker
	I0805 22:48:52.689372 1565328 start.go:901] validating driver "docker" against <nil>
	I0805 22:48:52.689494 1565328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:48:52.749346 1565328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:48:52.739749301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:48:52.749501 1565328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:48:52.749796 1565328 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 22:48:52.749953 1565328 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:48:52.752403 1565328 out.go:169] Using Docker driver with root privileges
	I0805 22:48:52.754037 1565328 cni.go:84] Creating CNI manager for ""
	I0805 22:48:52.754062 1565328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:48:52.754075 1565328 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 22:48:52.754181 1565328 start.go:340] cluster config:
	{Name:download-only-424275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-424275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:48:52.756010 1565328 out.go:97] Starting "download-only-424275" primary control-plane node in "download-only-424275" cluster
	I0805 22:48:52.756032 1565328 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 22:48:52.758321 1565328 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 22:48:52.758353 1565328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:48:52.758537 1565328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 22:48:52.775280 1565328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:48:52.775452 1565328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 22:48:52.775488 1565328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 22:48:52.775508 1565328 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 22:48:52.775516 1565328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 22:48:52.819573 1565328 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0805 22:48:52.819602 1565328 cache.go:56] Caching tarball of preloaded images
	I0805 22:48:52.820444 1565328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:48:52.822549 1565328 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 22:48:52.822579 1565328 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:48:52.904738 1565328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-424275 host does not exist
	  To start a cluster, run: "minikube start -p download-only-424275"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.15s)
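
The v1.30.3 run above differs from the v1.20.0 run in one detail: the kicbase image is already present, so the log reports "exists in cache, skipping pull" instead of writing the image to the cache. The gate is essentially a stat on the cache path before downloading; a minimal sketch (paths and the helper name are hypothetical):

	package main

	import (
		"fmt"
		"os"
	)

	// cachedOrDownload runs the downloader only when the cached tarball is
	// missing, mirroring the "skipping pull" decision in the log.
	func cachedOrDownload(cachePath string, download func() error) error {
		if _, err := os.Stat(cachePath); err == nil {
			fmt.Println(cachePath, "exists in cache, skipping pull")
			return nil
		}
		return download()
	}

	func main() {
		_ = cachedOrDownload("/tmp/kic/kicbase.tar", func() error {
			fmt.Println("downloading kicbase ...")
			return nil // placeholder downloader
		})
	}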

TestDownloadOnly/v1.30.3/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.31s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-424275
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-rc.0/json-events (13.2s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-926965 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-926965 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.195784578s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (13.20s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-926965
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-926965: exit status 85 (68.732041ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-102066 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-102066           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| delete  | -p download-only-102066           | download-only-102066 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-424275 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-424275           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| delete  | -p download-only-424275           | download-only-424275 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | -o=json --download-only           | download-only-926965 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | -p download-only-926965           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:49:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:49:00.723421 1565525 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:49:00.723564 1565525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:00.723576 1565525 out.go:304] Setting ErrFile to fd 2...
	I0805 22:49:00.723581 1565525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:00.723853 1565525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 22:49:00.724247 1565525 out.go:298] Setting JSON to true
	I0805 22:49:00.725116 1565525 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27081,"bootTime":1722871060,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 22:49:00.725188 1565525 start.go:139] virtualization:  
	I0805 22:49:00.727658 1565525 out.go:97] [download-only-926965] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 22:49:00.727878 1565525 notify.go:220] Checking for updates...
	I0805 22:49:00.729650 1565525 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:49:00.731690 1565525 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:49:00.733615 1565525 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 22:49:00.735680 1565525 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 22:49:00.737622 1565525 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0805 22:49:00.741644 1565525 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:49:00.741980 1565525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:49:00.770152 1565525 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 22:49:00.770256 1565525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:00.824282 1565525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:00.814966691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:00.824394 1565525 docker.go:307] overlay module found
	I0805 22:49:00.827138 1565525 out.go:97] Using the docker driver based on user configuration
	I0805 22:49:00.827172 1565525 start.go:297] selected driver: docker
	I0805 22:49:00.827178 1565525 start.go:901] validating driver "docker" against <nil>
	I0805 22:49:00.827293 1565525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 22:49:00.880709 1565525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-05 22:49:00.871310057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 22:49:00.880908 1565525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:49:00.881186 1565525 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0805 22:49:00.881343 1565525 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:49:00.883387 1565525 out.go:169] Using Docker driver with root privileges
	I0805 22:49:00.885266 1565525 cni.go:84] Creating CNI manager for ""
	I0805 22:49:00.885287 1565525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0805 22:49:00.885298 1565525 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 22:49:00.885394 1565525 start.go:340] cluster config:
	{Name:download-only-926965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-926965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:00.887967 1565525 out.go:97] Starting "download-only-926965" primary control-plane node in "download-only-926965" cluster
	I0805 22:49:00.887989 1565525 cache.go:121] Beginning downloading kic base image for docker with crio
	I0805 22:49:00.889947 1565525 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0805 22:49:00.889980 1565525 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:49:00.890158 1565525 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0805 22:49:00.904868 1565525 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0805 22:49:00.904998 1565525 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0805 22:49:00.905029 1565525 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0805 22:49:00.905041 1565525 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0805 22:49:00.905049 1565525 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0805 22:49:00.949404 1565525 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I0805 22:49:00.949429 1565525 cache.go:56] Caching tarball of preloaded images
	I0805 22:49:00.950342 1565525 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:49:00.952893 1565525 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 22:49:00.952937 1565525 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:49:01.032273 1565525 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d6d2de0d77e93ebb28e35d8a3f6a1a31 -> /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I0805 22:49:07.168940 1565525 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:49:07.169052 1565525 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0805 22:49:08.022485 1565525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 22:49:08.022895 1565525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/download-only-926965/config.json ...
	I0805 22:49:08.022934 1565525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/download-only-926965/config.json: {Name:mkf95614d105ab822730bc1819170f53323eddd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:08.023735 1565525 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:49:08.023913 1565525 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19373-1559727/.minikube/cache/linux/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-926965 host does not exist
	  To start a cluster, run: "minikube start -p download-only-926965"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)
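
Unlike the preload tarballs, the kubectl download in the log above uses the "checksum=file:..." form: the expected digest lives in a sibling .sha256 file next to the binary rather than inline in the URL. A sketch of that variant (the local path "kubectl" is hypothetical):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// sha256File hashes a local file and returns the hex digest.
	func sha256File(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		// Fetch the detached digest file referenced by checksum=file:...
		resp, err := http.Get("https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl.sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, _ := io.ReadAll(resp.Body)

		got, err := sha256File("kubectl") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		fmt.Println("match:", got == strings.TrimSpace(string(want)))
	}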

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-926965
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-045657 --alsologtostderr --binary-mirror http://127.0.0.1:36853 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-045657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-045657
--- PASS: TestBinaryMirror (0.54s)
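
TestBinaryMirror points --binary-mirror at a short-lived local HTTP server and checks that the Kubernetes binaries are fetched from it instead of the public CDN. Such a mirror is just a static file server whose directory mimics the dl.k8s.io release layout; a minimal sketch (the port matches the log, the directory name is illustrative):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// ./mirror must mimic the release layout, e.g.
		// ./mirror/release/v1.30.3/bin/linux/arm64/kubectl
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:36853", nil))
	}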

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-554168
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-554168: exit status 85 (78.691095ms)
-- stdout --
	* Profile "addons-554168" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-554168"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-554168
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-554168: exit status 85 (79.650518ms)
-- stdout --
	* Profile "addons-554168" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-554168"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (217.46s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-554168 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-554168 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m37.46320746s)
--- PASS: TestAddons/Setup (217.46s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-554168 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-554168 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/parallel/Registry (16.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.704569ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-x6xxq" [4ae86949-feca-437d-8b71-1b2bea971616] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008006748s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5pp4p" [03aac67e-c40e-4703-995f-88bab30fa562] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.009984968s
addons_test.go:342: (dbg) Run:  kubectl --context addons-554168 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-554168 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-554168 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.892718452s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 ip
2024/08/05 22:53:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.88s)
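
The wget --spider probe above is nothing more than an HTTP reachability check against the registry's cluster DNS name from inside a pod. The same check in Go (the URL resolves only in-cluster, as in the busybox pod the test launches):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		// Resolves only from inside the cluster, like the registry-test pod.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry status:", resp.Status)
	}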

TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n4tcl" [e6a1c2b8-213e-4ccb-8848-c9bddc5349b4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003830285s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-554168
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-554168: (5.763772357s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/CSI (59.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.474731ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-554168 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc -o jsonpath={.status.phase} -n default
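
The run of identical kubectl invocations above is the helper's wait loop: it re-queries the claim's .status.phase until it reads Bound or the 6m0s budget runs out. A sketch of the same loop (the poll interval is a guess; the helper's real timing is not shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVC polls the same jsonpath query the test helper uses.
	func waitForPVC(kubeContext, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
	}

	func main() {
		fmt.Println(waitForPVC("addons-554168", "hpvc", 6*time.Minute))
	}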
addons_test.go:580: (dbg) Run:  kubectl --context addons-554168 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [74280296-8e7f-4633-9d5b-1700fa6faab6] Pending
helpers_test.go:344: "task-pv-pod" [74280296-8e7f-4633-9d5b-1700fa6faab6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [74280296-8e7f-4633-9d5b-1700fa6faab6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004115285s
addons_test.go:590: (dbg) Run:  kubectl --context addons-554168 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-554168 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-554168 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-554168 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-554168 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-554168 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default  (the same poll repeated 15 more times until the restored claim bound)
addons_test.go:622: (dbg) Run:  kubectl --context addons-554168 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [51d83bdf-5ca0-48bf-b6ec-f86eff626f49] Pending
helpers_test.go:344: "task-pv-pod-restore" [51d83bdf-5ca0-48bf-b6ec-f86eff626f49] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [51d83bdf-5ca0-48bf-b6ec-f86eff626f49] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003374511s
addons_test.go:632: (dbg) Run:  kubectl --context addons-554168 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-554168 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-554168 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766290534s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.71s)
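The workflow this test walks through — bind a PVC against the CSI hostpath driver, snapshot it, delete the original, and restore a fresh claim from the snapshot — can be reproduced by hand. A minimal shell sketch, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and that the snapshot class and storage class are named csi-hostpath-snapclass and csi-hostpath-sc (both names are assumptions; check "kubectl get volumesnapshotclass,storageclass" on your cluster):

# snapshot the bound claim "hpvc"
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF
# poll until the snapshot reports readyToUse=true
kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'

# restore the snapshot into a fresh claim via dataSource
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF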

                                                
                                    
TestAddons/parallel/Headlamp (11.32s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-554168 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-554168 --alsologtostderr -v=1: (1.070488285s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-xxnxt" [bce43d7c-3aa5-4c39-b390-388a4dd4a0e6] Pending
helpers_test.go:344: "headlamp-9d868696f-xxnxt" [bce43d7c-3aa5-4c39-b390-388a4dd4a0e6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-xxnxt" [bce43d7c-3aa5-4c39-b390-388a4dd4a0e6] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004060117s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.32s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-zsgbm" [d59f3231-dd33-4be9-9621-25dd7a6772de] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003982031s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-554168
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (55.74s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-554168 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-554168 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554168 get pvc test-pvc -o jsonpath={.status.phase} -n default  (the same poll repeated 5 more times until the claim bound)
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bd42e114-0758-464f-bb9a-c91421e920bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bd42e114-0758-464f-bb9a-c91421e920bb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bd42e114-0758-464f-bb9a-c91421e920bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003459137s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-554168 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 ssh "cat /opt/local-path-provisioner/pvc-61300692-a5b6-4c41-ab58-cbf29128fef9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-554168 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-554168 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.595044035s)
--- PASS: TestAddons/parallel/LocalPath (55.74s)
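The provisioner here is Rancher's local-path-provisioner: data written into the PVC lands on the node's filesystem under /opt/local-path-provisioner/pvc-<uid>_<namespace>_<claim>/, which is why the test can read file1 back over ssh. A minimal sketch, assuming the storage-provisioner-rancher addon is enabled and exposes a storage class named local-path (an assumption; verify with "kubectl get storageclass"):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed class name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF
# after a pod has mounted the claim and written file1, the same bytes are
# visible on the node (the pvc-<uid> directory name comes from the bound PV)
minikube -p addons-554168 ssh "ls /opt/local-path-provisioner/"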

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vngm6" [bc68d922-6356-4b7c-a0af-9f0e70a94548] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004368721s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-554168
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (11.76s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-qtvhv" [7986bd0d-ca18-4853-9a09-6e808191f9ce] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00513064s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-554168 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-554168 addons disable yakd --alsologtostderr -v=1: (5.751390827s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.18s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-554168
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-554168: (11.907265563s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-554168
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-554168
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-554168
--- PASS: TestAddons/StoppedEnableDisable (12.18s)
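The point of this test is that addon toggles work against a stopped cluster: the enable/disable is recorded in the profile's configuration and applied on the next start. Spelled out with the same commands the test runs (a sketch using an installed minikube binary rather than the out/ build):

minikube stop -p addons-554168
minikube addons enable dashboard -p addons-554168     # recorded while stopped
minikube addons disable dashboard -p addons-554168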

                                                
                                    
TestCertOptions (39.61s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-480700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-480700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.971166377s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-480700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-480700 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-480700 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-480700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-480700
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-480700: (1.985816512s)
--- PASS: TestCertOptions (39.61s)
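The test asserts that the extra SANs and the nonstandard port actually end up in the serving certificate and the kubeconfig. A hand-run equivalent, a sketch using the same cert path the test probes:

minikube start -p cert-options-480700 --apiserver-ips=192.168.15.15 \
  --apiserver-names=www.google.com --apiserver-port=8555 \
  --driver=docker --container-runtime=crio
# the extra IPs/names should be listed under X509v3 Subject Alternative Name
minikube -p cert-options-480700 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'
# and the custom port should appear in the kubeconfig server URL
# (index 0 assumes a single-cluster kubeconfig)
kubectl --context cert-options-480700 config view \
  -o jsonpath='{.clusters[0].cluster.server}'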

                                                
                                    
TestCertExpiration (242.6s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-902407 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0805 23:43:29.307127 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-902407 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.918564992s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-902407 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-902407 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.171672802s)
helpers_test.go:175: Cleaning up "cert-expiration-902407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-902407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-902407: (2.508248475s)
--- PASS: TestCertExpiration (242.60s)
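The two starts exercise certificate re-issuance: the first mints certs valid for only 3 minutes, the test waits for them to lapse, and the second start (with --cert-expiration=8760h) must recover by generating fresh ones. A sketch for inspecting the resulting validity window (cert path as used by the cert-options test above):

minikube start -p cert-expiration-902407 --cert-expiration=3m \
  --driver=docker --container-runtime=crio
# ...wait >3m for the certs to expire, then restart with a one-year lifetime
minikube start -p cert-expiration-902407 --cert-expiration=8760h \
  --driver=docker --container-runtime=crio
minikube -p cert-expiration-902407 ssh \
  "sudo openssl x509 -noout -startdate -enddate -in /var/lib/minikube/certs/apiserver.crt"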

                                                
                                    
TestForceSystemdFlag (46.34s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-958329 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-958329 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.329326459s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-958329 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-958329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-958329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-958329: (2.648967477s)
--- PASS: TestForceSystemdFlag (46.34s)
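With --force-systemd, minikube configures the container runtime to use the systemd cgroup manager; the "ssh cat" above is how the test inspects the generated CRI-O drop-in. A sketch of the same check (the expected line is an assumption based on what the test reads from that file):

minikube start -p force-systemd-flag-958329 --force-systemd \
  --driver=docker --container-runtime=crio
minikube -p force-systemd-flag-958329 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected (assumed): cgroup_manager = "systemd"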

                                                
                                    
TestForceSystemdEnv (42.69s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-605837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-605837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.110069938s)
helpers_test.go:175: Cleaning up "force-systemd-env-605837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-605837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-605837: (2.579475098s)
--- PASS: TestForceSystemdEnv (42.69s)

                                                
                                    
TestErrorSpam/setup (30.81s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-554019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-554019 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-554019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-554019 --driver=docker  --container-runtime=crio: (30.811583049s)
--- PASS: TestErrorSpam/setup (30.81s)

                                                
                                    
TestErrorSpam/start (0.85s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

                                                
                                    
TestErrorSpam/status (0.98s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
TestErrorSpam/pause (1.71s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (1.43s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 stop: (1.232991956s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-554019 --log_dir /tmp/nospam-554019 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19373-1559727/.minikube/files/etc/test/nested/copy/1565121/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.23s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-220049 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m1.227130589s)
--- PASS: TestFunctional/serial/StartWithProxy (61.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (22.75s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-220049 --alsologtostderr -v=8: (22.744421994s)
functional_test.go:659: soft start took 22.746331919s for "functional-220049" cluster.
--- PASS: TestFunctional/serial/SoftStart (22.75s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-220049 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:3.1: (1.577899613s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:3.3: (1.440171786s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 cache add registry.k8s.io/pause:latest: (1.373160951s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-220049 /tmp/TestFunctionalserialCacheCmdcacheadd_local224546914/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache add minikube-local-cache-test:functional-220049
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache delete minikube-local-cache-test:functional-220049
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-220049
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (315.330518ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 cache reload: (1.102848811s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
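The sequence above demonstrates what "cache reload" is for: after an image is removed inside the node, reload pushes it back from the host-side cache instead of re-pulling from the registry. The same steps, sketched with an installed minikube binary:

minikube -p functional-220049 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-220049 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
minikube -p functional-220049 cache reload
minikube -p functional-220049 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again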

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 kubectl -- --context functional-220049 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-220049 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.82s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0805 23:02:53.895810 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
... (the same cert_rotation.go:168 "no such file or directory" error repeated 12 more times between 23:02:53 and 23:03:14 as the rotation watcher retried)
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-220049 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.823041353s)
functional_test.go:757: restart took 37.823140708s for "functional-220049" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.82s)
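--extra-config=component.flag=value threads a flag through to the named control-plane component's static-pod manifest, and --wait=all makes the restart block until the cluster is healthy again. A sketch for verifying the flag landed on the running apiserver (the label selector is a standard control-plane pod label, assumed here):

minikube start -p functional-220049 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-220049 -n kube-system get pod \
  -l component=kube-apiserver -o yaml | grep enable-admission-plugins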

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-220049 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
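The phase/status lines above come from inspecting the control-plane pods' JSON; the same check by hand is a one-liner:

kubectl --context functional-220049 -n kube-system get po -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'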

                                                
                                    
TestFunctional/serial/LogsCmd (1.74s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 logs: (1.742777778s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.8s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 logs --file /tmp/TestFunctionalserialLogsFileCmd3133023422/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 logs --file /tmp/TestFunctionalserialLogsFileCmd3133023422/001/logs.txt: (1.795942716s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

                                                
                                    
TestFunctional/serial/InvalidService (4.89s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-220049 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-220049
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-220049: exit status 115 (596.693625ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31513 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-220049 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-220049 delete -f testdata/invalidsvc.yaml: (1.052366401s)
--- PASS: TestFunctional/serial/InvalidService (4.89s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 config get cpus: exit status 14 (70.728122ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 config get cpus: exit status 14 (59.575269ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
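"config get" on an unset key is a distinct failure mode (exit status 14, "specified key could not be found in config"), which is exactly what the test asserts before and after the set/unset round-trip. A sketch of the same flow:

minikube -p functional-220049 config get cpus; echo "exit=$?"   # exit=14 while unset
minikube -p functional-220049 config set cpus 2
minikube -p functional-220049 config get cpus                   # prints 2
minikube -p functional-220049 config unset cpus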

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.45s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-220049 --alsologtostderr -v=1]
2024/08/05 23:07:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-220049 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1595257: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-220049 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.590834ms)

                                                
                                                
-- stdout --
	* [functional-220049] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:07:19.154548 1594976 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:07:19.154669 1594976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:19.154677 1594976 out.go:304] Setting ErrFile to fd 2...
	I0805 23:07:19.154683 1594976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:19.154932 1594976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:07:19.155291 1594976 out.go:298] Setting JSON to false
	I0805 23:07:19.156238 1594976 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28179,"bootTime":1722871060,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 23:07:19.156313 1594976 start.go:139] virtualization:  
	I0805 23:07:19.159218 1594976 out.go:177] * [functional-220049] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 23:07:19.161194 1594976 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:07:19.161505 1594976 notify.go:220] Checking for updates...
	I0805 23:07:19.165109 1594976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:07:19.166939 1594976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 23:07:19.168683 1594976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 23:07:19.170510 1594976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 23:07:19.172404 1594976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:07:19.174678 1594976 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:07:19.175248 1594976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:07:19.206060 1594976 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 23:07:19.206175 1594976 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:07:19.276008 1594976 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 23:07:19.265082153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:07:19.276121 1594976 docker.go:307] overlay module found
	I0805 23:07:19.278139 1594976 out.go:177] * Using the docker driver based on existing profile
	I0805 23:07:19.279705 1594976 start.go:297] selected driver: docker
	I0805 23:07:19.279725 1594976 start.go:901] validating driver "docker" against &{Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:07:19.279831 1594976 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:07:19.281890 1594976 out.go:177] 
	W0805 23:07:19.283648 1594976 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 23:07:19.285553 1594976 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
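--dry-run runs flag validation without creating or mutating anything, so the undersized --memory request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a sane invocation exits 0. A sketch:

minikube start -p functional-220049 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio; echo "exit=$?"   # exit=23
minikube start -p functional-220049 --dry-run \
  --driver=docker --container-runtime=crio; echo "exit=$?"   # exit=0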

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-220049 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-220049 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.642313ms)

                                                
                                                
-- stdout --
	* [functional-220049] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:07:19.584096 1595091 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:07:19.585444 1595091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:19.585512 1595091 out.go:304] Setting ErrFile to fd 2...
	I0805 23:07:19.585534 1595091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:19.585991 1595091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:07:19.587613 1595091 out.go:298] Setting JSON to false
	I0805 23:07:19.588650 1595091 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28179,"bootTime":1722871060,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 23:07:19.589021 1595091 start.go:139] virtualization:  
	I0805 23:07:19.591134 1595091 out.go:177] * [functional-220049] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0805 23:07:19.593547 1595091 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:07:19.593679 1595091 notify.go:220] Checking for updates...
	I0805 23:07:19.596830 1595091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:07:19.598999 1595091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 23:07:19.600691 1595091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 23:07:19.602297 1595091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 23:07:19.604058 1595091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:07:19.606222 1595091 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:07:19.606733 1595091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:07:19.635454 1595091 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 23:07:19.635572 1595091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:07:19.694564 1595091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-05 23:07:19.684538027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:07:19.694676 1595091 docker.go:307] overlay module found
	I0805 23:07:19.696925 1595091 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0805 23:07:19.698880 1595091 start.go:297] selected driver: docker
	I0805 23:07:19.698899 1595091 start.go:901] validating driver "docker" against &{Name:functional-220049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-220049 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:07:19.699004 1595091 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:07:19.701566 1595091 out.go:177] 
	W0805 23:07:19.703650 1595091 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 23:07:19.706164 1595091 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
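
The French text above is the localized rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY error exercised by DryRun: "L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" reads "the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The test drives localization purely through the environment; below is a minimal Go sketch of invoking the binary the same way, assuming (not confirmed by this log) that minikube picks the locale up from LC_ALL/LANG.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same dry-run invocation as the test above; only the locale is added.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-220049", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker", "--container-runtime=crio")
	// Assumption: LC_ALL/LANG select minikube's output language.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	// Expect exit status 23 and the French RSRC_INSUFFICIENT_REQ_MEMORY message.
	fmt.Printf("%s\nerr: %v\n", out, err)
}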

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
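
StatusCmd queries the same state three ways: the default human format, a Go template over the .Host/.Kubelet/.APIServer/.Kubeconfig fields (the "kublet:" label in the format string above is just literal text; the template key .Kubelet is spelled correctly), and -o json. Below is a minimal sketch of consuming the JSON form; mapping the template keys directly onto JSON field names is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names mirror the template keys used by the test; treating them
// as the top-level JSON keys of `status -o json` is an assumption.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-220049", "status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status exited non-zero (a component may be down):", err)
	}
	var s clusterStatus
	if err := json.Unmarshal(out, &s); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
}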

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-220049 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-220049 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6mdcm" [aa27e565-7d5c-48ef-92f8-14cc6c5b9f40] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6mdcm" [aa27e565-7d5c-48ef-92f8-14cc6c5b9f40] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004344612s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32195
functional_test.go:1671: http://192.168.49.2:32195: success! body:
Hostname: hello-node-connect-6f49f58cd5-6mdcm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32195
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.62s)
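
The test deploys echoserver-arm, exposes it as a NodePort service, resolves the URL with `minikube service --url`, and expects the echo body to name the serving pod. A minimal Go sketch of the same probe, using the URL and Hostname line printed above (both taken verbatim from this run; they change run to run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// URL as resolved by `minikube service hello-node-connect --url` above.
	resp, err := http.Get("http://192.168.49.2:32195/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The echoserver reports the serving pod in a "Hostname:" line.
	if strings.Contains(string(body), "Hostname: hello-node-connect-") {
		fmt.Println("echoserver reachable through the NodePort")
	}
}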

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh -n functional-220049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cp functional-220049:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd388064118/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh -n functional-220049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh -n functional-220049 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)
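
CpCmd round-trips a file: host to node, node back to host, each copy verified with `ssh sudo cat`. A sketch of the same round trip with a local content check; the command strings follow the test above, while the /tmp destination path is illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// run executes minikube with the given arguments and panics on failure.
func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
	}
}

func main() {
	// Host -> node, then node -> host, as in the test above.
	run("-p", "functional-220049", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	run("-p", "functional-220049", "cp",
		"functional-220049:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt")
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	got, err := os.ReadFile("/tmp/cp-test-roundtrip.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(want, got) {
		panic("round-tripped file differs from the original")
	}
	fmt.Println("cp round trip OK")
}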

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1565121/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /etc/test/nested/copy/1565121/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
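
FileSync checks that a file staged on the host before startup appears inside the node at the matching absolute path. A sketch of the staging step, assuming the convention that everything under $MINIKUBE_HOME/files/<path> is synced into the node as /<path>; the MINIKUBE_HOME value and file content are taken from this run's log:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	// Assumption: files under $MINIKUBE_HOME/files/etc/... surface in the
	// node as /etc/...; the test above reads
	// /etc/test/nested/copy/1565121/hosts inside the VM.
	home := "/home/jenkins/minikube-integration/19373-1559727/.minikube"
	dst := filepath.Join(home, "files", "etc", "test", "nested", "copy", "1565121", "hosts")
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		panic(err)
	}
	content := []byte("Test file for checking file sync process\n")
	if err := os.WriteFile(dst, content, 0o644); err != nil {
		panic(err)
	}
}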

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1565121.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /etc/ssl/certs/1565121.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1565121.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /usr/share/ca-certificates/1565121.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15651212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /etc/ssl/certs/15651212.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15651212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /usr/share/ca-certificates/15651212.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)
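
Each synced certificate is checked under two names: the original file name (1565121.pem, 15651212.pem) and an eight-hex-digit name with a .0 suffix (51391683.0, 3ec20f2e.0). The paired paths suggest the standard OpenSSL subject-hash naming used for CA directories; a sketch of deriving that hash with the openssl CLI, run against a local copy of the certificate (the input path here is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -noout -hash` prints the subject hash that, per the
	// CA-directory convention, names the /etc/ssl/certs/<hash>.0 copy.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "1565121.pem").Output() // local copy of the synced cert
	if err != nil {
		panic(err)
	}
	fmt.Printf("expect /etc/ssl/certs/%s.0 inside the node\n",
		strings.TrimSpace(string(out)))
}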

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-220049 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "sudo systemctl is-active docker": exit status 1 (340.951501ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "sudo systemctl is-active containerd": exit status 1 (361.372029ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
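
Both non-zero exits here are the expected outcome: on a crio cluster, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (status 3 in this log), which ssh propagates. A sketch that treats that exit as the pass condition rather than an error:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-220049",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Non-zero exit means the unit is not active: expected on crio.
			fmt.Printf("%s: %s (exit %d, as expected)\n",
				unit, strings.TrimSpace(string(out)), ee.ExitCode())
			continue
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s is unexpectedly active: %s\n", unit, out)
	}
}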

                                                
                                    
x
+
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 version -o=json --components: (1.035848217s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-220049 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-220049
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-220049 image ls --format short --alsologtostderr:
I0805 23:07:30.732196 1595663 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:30.732329 1595663 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:30.732340 1595663 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:30.732345 1595663 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:30.732625 1595663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
I0805 23:07:30.733263 1595663 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:30.733392 1595663 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:30.733906 1595663 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
I0805 23:07:30.751444 1595663 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:30.751511 1595663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
I0805 23:07:30.769601 1595663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
I0805 23:07:30.861362 1595663 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-220049 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/my-image                      | functional-220049  | 151731d09ea4f | 1.64MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| docker.io/kicbase/echo-server           | functional-220049  | ce2d2cda2d858 | 4.79MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-220049 image ls --format table --alsologtostderr:
I0805 23:07:34.077627 1596017 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:34.077872 1596017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:34.077889 1596017 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:34.077895 1596017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:34.078232 1596017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
I0805 23:07:34.079485 1596017 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:34.079742 1596017 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:34.080318 1596017 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
I0805 23:07:34.098550 1596017 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:34.098622 1596017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
I0805 23:07:34.114715 1596017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
I0805 23:07:34.205169 1596017 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-220049 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size"
:"90278450"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"151731d09ea4f610d11185a691f24c73e0f10a008fb7f3a3673a31beeb649aa3","repoDigests":["localhost/my-image@sha256:61dc344501aa7aba7471e1f6340860f649ac9d1aa37f0230a3dfc77dc48869ca"],"repoTags":["localhost/my-image:functional-220049"],"size":"1640226"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4
c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f
6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d35
69b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manage
r@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},{"id":"8057e0500773
a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-220049"],"size":"4788229"},{"id":"a89cb3a1f895d6defed33ab5387f75af8b7baef3f51e48b9b1ca70986750b655","repoDigests":["docker.io/library/27d59105f093a5ca0502085f29f1425aa641af430c7a18bcd4683c2b626c3100-tmp@sha256:bab8fead068c37b6435e5ec72b138ae68fa1f4eaba94ee3d503f4b41650ce66a"],"repoTags":[],"size":"1637644"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-220049 image ls --format json --alsologtostderr:
I0805 23:07:33.829828 1595986 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:33.829988 1595986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:33.829999 1595986 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:33.830004 1595986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:33.830301 1595986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
I0805 23:07:33.831018 1595986 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:33.831186 1595986 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:33.831766 1595986 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
I0805 23:07:33.855768 1595986 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:33.856505 1595986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
I0805 23:07:33.880664 1595986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
I0805 23:07:33.973177 1595986 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
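
The JSON listing is an array of objects with id, repoDigests, repoTags, and a decimal-string size, all visible in the stdout above. A minimal sketch of decoding it, with the struct shape read off this run's output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shape taken from the `image ls --format json` stdout above; note that
// size is a decimal string, not a number, and repoTags may be empty.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-220049", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		id := img.ID
		if len(id) > 12 {
			id = id[:12] // short id, as in the table listing
		}
		fmt.Println(id, img.RepoTags, img.Size)
	}
}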

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-220049 image ls --format yaml --alsologtostderr:
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-220049
size: "4788229"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-220049 image ls --format yaml --alsologtostderr:
I0805 23:07:30.965708 1595695 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:30.965882 1595695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:30.965892 1595695 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:30.965897 1595695 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:30.966139 1595695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
I0805 23:07:30.966740 1595695 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:30.966865 1595695 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:30.967382 1595695 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
I0805 23:07:30.987115 1595695 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:30.987178 1595695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
I0805 23:07:31.007590 1595695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
I0805 23:07:31.101605 1595695 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh pgrep buildkitd: exit status 1 (252.870094ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image build -t localhost/my-image:functional-220049 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 image build -t localhost/my-image:functional-220049 testdata/build --alsologtostderr: (2.123972073s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-220049 image build -t localhost/my-image:functional-220049 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a89cb3a1f89
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-220049
--> 151731d09ea
Successfully tagged localhost/my-image:functional-220049
151731d09ea4f610d11185a691f24c73e0f10a008fb7f3a3673a31beeb649aa3
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-220049 image build -t localhost/my-image:functional-220049 testdata/build --alsologtostderr:
I0805 23:07:31.454324 1595783 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:31.456061 1595783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:31.456119 1595783 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:31.456141 1595783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:31.456498 1595783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
I0805 23:07:31.457387 1595783 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:31.458748 1595783 config.go:182] Loaded profile config "functional-220049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:31.459346 1595783 cli_runner.go:164] Run: docker container inspect functional-220049 --format={{.State.Status}}
I0805 23:07:31.477943 1595783 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:31.477999 1595783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-220049
I0805 23:07:31.495172 1595783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34647 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/functional-220049/id_rsa Username:docker}
I0805 23:07:31.589763 1595783 build_images.go:161] Building image from path: /tmp/build.3205141934.tar
I0805 23:07:31.589847 1595783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 23:07:31.599409 1595783 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3205141934.tar
I0805 23:07:31.602963 1595783 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3205141934.tar: stat -c "%s %y" /var/lib/minikube/build/build.3205141934.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3205141934.tar': No such file or directory
I0805 23:07:31.603002 1595783 ssh_runner.go:362] scp /tmp/build.3205141934.tar --> /var/lib/minikube/build/build.3205141934.tar (3072 bytes)
I0805 23:07:31.627991 1595783 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3205141934
I0805 23:07:31.637243 1595783 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3205141934 -xf /var/lib/minikube/build/build.3205141934.tar
I0805 23:07:31.646503 1595783 crio.go:315] Building image: /var/lib/minikube/build/build.3205141934
I0805 23:07:31.646614 1595783 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-220049 /var/lib/minikube/build/build.3205141934 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0805 23:07:33.499178 1595783 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-220049 /var/lib/minikube/build/build.3205141934 --cgroup-manager=cgroupfs: (1.8525256s)
I0805 23:07:33.499243 1595783 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3205141934
I0805 23:07:33.508423 1595783 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3205141934.tar
I0805 23:07:33.517167 1595783 build_images.go:217] Built localhost/my-image:functional-220049 from /tmp/build.3205141934.tar
I0805 23:07:33.517200 1595783 build_images.go:133] succeeded building to: functional-220049
I0805 23:07:33.517205 1595783 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)
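
The stderr above shows the crio build path end to end: `ssh pgrep buildkitd` exits 1 (no BuildKit daemon on this runtime), so minikube tars the build context, copies it to /var/lib/minikube/build inside the node, and runs `sudo podman build ... --cgroup-manager=cgroupfs` there. A sketch of the same detect-and-fall-back probe; the build directory name is the one from this run and would differ on a fresh run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits non-zero when no buildkitd process exists, which is
	// what selects the podman path in the log above.
	if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-220049",
		"ssh", "pgrep buildkitd").Run(); err == nil {
		fmt.Println("buildkitd present; a BuildKit build would be used instead")
		return
	}
	// Fallback seen in the log: build the shipped context with podman.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-220049",
		"ssh", "sudo podman build -t localhost/my-image:functional-220049 "+
			"/var/lib/minikube/build/build.3205141934 --cgroup-manager=cgroupfs").CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}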

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-220049
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image load --daemon docker.io/kicbase/echo-server:functional-220049 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 image load --daemon docker.io/kicbase/echo-server:functional-220049 --alsologtostderr: (1.238901168s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)
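
A sketch of what this subtest drives: copying an image from the host's docker daemon into the cluster's container runtime (cri-o in this job), then listing it from inside:

	minikube -p functional-220049 image load --daemon docker.io/kicbase/echo-server:functional-220049
	minikube -p functional-220049 image ls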

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image load --daemon docker.io/kicbase/echo-server:functional-220049 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-220049 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-220049 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-qglk6" [5641c543-4afc-4765-9aaf-26a61c17959e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-qglk6" [5641c543-4afc-4765-9aaf-26a61c17959e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00360333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)
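
A minimal sketch of the deployment the later ServiceCmd subtests build on: an arm echo server exposed through a NodePort, with a wait for the pod to become Ready (the wait timeout here is illustrative):

	kubectl --context functional-220049 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-220049 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-220049 wait --for=condition=ready pod -l app=hello-node --timeout=120s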

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-220049
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image load --daemon docker.io/kicbase/echo-server:functional-220049 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image save docker.io/kicbase/echo-server:functional-220049 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-220049 image save docker.io/kicbase/echo-server:functional-220049 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (2.304958063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.31s)
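
A sketch of the save path, with an illustrative output location in place of the Jenkins workspace path above:

	# export the image from the cluster runtime to a tarball on the host
	minikube -p functional-220049 image save docker.io/kicbase/echo-server:functional-220049 /tmp/echo-server-save.tar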

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image rm docker.io/kicbase/echo-server:functional-220049 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
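
And the reverse direction: loading the saved tarball back into the cluster runtime and confirming the image is present (path illustrative, matching the save sketch above):

	minikube -p functional-220049 image load /tmp/echo-server-save.tar
	minikube -p functional-220049 image ls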

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-220049
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 image save --daemon docker.io/kicbase/echo-server:functional-220049 --alsologtostderr
E0805 23:03:34.857545 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-220049
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1591496: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service list -o json
functional_test.go:1490: Took "333.87877ms" to run "out/minikube-linux-arm64 -p functional-220049 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30558
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30558
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
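
Taken together, the five ServiceCmd query subtests above map onto these invocations; endpoints such as 192.168.49.2:30558 are allocated per run:

	minikube -p functional-220049 service list                           # table of services
	minikube -p functional-220049 service list -o json                   # same, machine-readable
	minikube -p functional-220049 service --https --url hello-node       # https://<node-ip>:<nodeport>
	minikube -p functional-220049 service hello-node --url --format='{{.IP}}'   # node IP only
	minikube -p functional-220049 service hello-node --url               # http://<node-ip>:<nodeport>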

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "335.337835ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "55.988448ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "327.605973ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "53.78767ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
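
The timings above illustrate the point of the light variants: they skip probing each cluster's status, which is why they return in roughly 55 ms versus roughly 330 ms for the full listings. The four forms exercised:

	minikube profile list
	minikube profile list -l               # light: no status probes
	minikube profile list -o json
	minikube profile list -o json --light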

TestFunctional/parallel/MountCmd/any-port (15.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdany-port1615808225/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722899219724802033" to /tmp/TestFunctionalparallelMountCmdany-port1615808225/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722899219724802033" to /tmp/TestFunctionalparallelMountCmdany-port1615808225/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722899219724802033" to /tmp/TestFunctionalparallelMountCmdany-port1615808225/001/test-1722899219724802033
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (387.746611ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 23:06 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 23:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 23:06 test-1722899219724802033
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh cat /mount-9p/test-1722899219724802033
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-220049 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3e538913-0f73-4782-8b8b-0a4f4761816a] Pending
helpers_test.go:344: "busybox-mount" [3e538913-0f73-4782-8b8b-0a4f4761816a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3e538913-0f73-4782-8b8b-0a4f4761816a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3e538913-0f73-4782-8b8b-0a4f4761816a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004408455s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-220049 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdany-port1615808225/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.19s)
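
A condensed sketch of the 9p mount round-trip this test performs. /tmp/demo-src is an illustrative host directory, and the first findmnt probe may fail once while the mount is still coming up, as seen in the retry above:

	minikube mount -p functional-220049 /tmp/demo-src:/mount-9p &     # keeps running until killed
	minikube -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-220049 ssh -- ls -la /mount-9p
	minikube -p functional-220049 ssh "sudo umount -f /mount-9p"
	minikube mount -p functional-220049 --kill=true                   # stop the background mount process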

TestFunctional/parallel/MountCmd/specific-port (2.09s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdspecific-port994661198/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.556107ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdspecific-port994661198/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "sudo umount -f /mount-9p": exit status 1 (252.525708ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-220049 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdspecific-port994661198/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T" /mount1: exit status 1 (695.25545ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-220049 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-220049 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-220049 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4134138493/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-220049 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-220049
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-220049
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-220049
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (189.43s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-148242 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-148242 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m8.648932958s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (189.43s)
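
The HA suite begins from this bring-up; a sketch mirroring the Run line above, on the docker driver with the cri-o runtime:

	minikube start -p ha-148242 --wait=true --memory=2200 --ha --driver=docker --container-runtime=crio
	minikube -p ha-148242 status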

TestMultiControlPlane/serial/DeployApp (7.6s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-148242 -- rollout status deployment/busybox: (4.57058779s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-5vp47 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-fjh2z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-jmvbf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-5vp47 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-fjh2z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-jmvbf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-5vp47 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-fjh2z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-jmvbf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.60s)

TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-5vp47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-5vp47 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-fjh2z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-fjh2z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-jmvbf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-148242 -- exec busybox-fc5497c4f-jmvbf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)

TestMultiControlPlane/serial/AddWorkerNode (36.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-148242 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-148242 -v=7 --alsologtostderr: (35.454929144s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr: (1.019981798s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.48s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-148242 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (19.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp testdata/cp-test.txt ha-148242:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344550989/001/cp-test_ha-148242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242:/home/docker/cp-test.txt ha-148242-m02:/home/docker/cp-test_ha-148242_ha-148242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test_ha-148242_ha-148242-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242:/home/docker/cp-test.txt ha-148242-m03:/home/docker/cp-test_ha-148242_ha-148242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test_ha-148242_ha-148242-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242:/home/docker/cp-test.txt ha-148242-m04:/home/docker/cp-test_ha-148242_ha-148242-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test_ha-148242_ha-148242-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp testdata/cp-test.txt ha-148242-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344550989/001/cp-test_ha-148242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m02:/home/docker/cp-test.txt ha-148242:/home/docker/cp-test_ha-148242-m02_ha-148242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test_ha-148242-m02_ha-148242.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m02:/home/docker/cp-test.txt ha-148242-m03:/home/docker/cp-test_ha-148242-m02_ha-148242-m03.txt
E0805 23:12:53.895231 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test_ha-148242-m02_ha-148242-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m02:/home/docker/cp-test.txt ha-148242-m04:/home/docker/cp-test_ha-148242-m02_ha-148242-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test_ha-148242-m02_ha-148242-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp testdata/cp-test.txt ha-148242-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344550989/001/cp-test_ha-148242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m03:/home/docker/cp-test.txt ha-148242:/home/docker/cp-test_ha-148242-m03_ha-148242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test_ha-148242-m03_ha-148242.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m03:/home/docker/cp-test.txt ha-148242-m02:/home/docker/cp-test_ha-148242-m03_ha-148242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test_ha-148242-m03_ha-148242-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m03:/home/docker/cp-test.txt ha-148242-m04:/home/docker/cp-test_ha-148242-m03_ha-148242-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test_ha-148242-m03_ha-148242-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp testdata/cp-test.txt ha-148242-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344550989/001/cp-test_ha-148242-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m04:/home/docker/cp-test.txt ha-148242:/home/docker/cp-test_ha-148242-m04_ha-148242.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242 "sudo cat /home/docker/cp-test_ha-148242-m04_ha-148242.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m04:/home/docker/cp-test.txt ha-148242-m02:/home/docker/cp-test_ha-148242-m04_ha-148242-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test_ha-148242-m04_ha-148242-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 cp ha-148242-m04:/home/docker/cp-test.txt ha-148242-m03:/home/docker/cp-test_ha-148242-m04_ha-148242-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 ssh -n ha-148242-m03 "sudo cat /home/docker/cp-test_ha-148242-m04_ha-148242-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.46s)
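
The matrix above is every (source, destination) pairing of the host and the four nodes; each leg reduces to one cp plus a verification cat over ssh. One representative leg, with an illustrative host destination path:

	minikube -p ha-148242 cp testdata/cp-test.txt ha-148242-m02:/home/docker/cp-test.txt
	minikube -p ha-148242 ssh -n ha-148242-m02 "sudo cat /home/docker/cp-test.txt"
	minikube -p ha-148242 cp ha-148242-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-148242-m02.txt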

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 node stop m02 -v=7 --alsologtostderr: (11.994939674s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr: exit status 7 (767.428126ms)
-- stdout --
	ha-148242
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-148242-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148242-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-148242-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0805 23:13:17.345276 1612138 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:13:17.345764 1612138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:13:17.345779 1612138 out.go:304] Setting ErrFile to fd 2...
	I0805 23:13:17.345785 1612138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:13:17.346171 1612138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:13:17.346413 1612138 out.go:298] Setting JSON to false
	I0805 23:13:17.346452 1612138 mustload.go:65] Loading cluster: ha-148242
	I0805 23:13:17.347265 1612138 config.go:182] Loaded profile config "ha-148242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:13:17.347296 1612138 status.go:255] checking status of ha-148242 ...
	I0805 23:13:17.348031 1612138 cli_runner.go:164] Run: docker container inspect ha-148242 --format={{.State.Status}}
	I0805 23:13:17.348620 1612138 notify.go:220] Checking for updates...
	I0805 23:13:17.372302 1612138 status.go:330] ha-148242 host status = "Running" (err=<nil>)
	I0805 23:13:17.372327 1612138 host.go:66] Checking if "ha-148242" exists ...
	I0805 23:13:17.372745 1612138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148242
	I0805 23:13:17.390530 1612138 host.go:66] Checking if "ha-148242" exists ...
	I0805 23:13:17.390856 1612138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:13:17.390902 1612138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148242
	I0805 23:13:17.415534 1612138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34652 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/ha-148242/id_rsa Username:docker}
	I0805 23:13:17.514373 1612138 ssh_runner.go:195] Run: systemctl --version
	I0805 23:13:17.518684 1612138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:13:17.533217 1612138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:13:17.592129 1612138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-05 23:13:17.580281128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:13:17.592948 1612138 kubeconfig.go:125] found "ha-148242" server: "https://192.168.49.254:8443"
	I0805 23:13:17.592983 1612138 api_server.go:166] Checking apiserver status ...
	I0805 23:13:17.593031 1612138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:13:17.604413 1612138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0805 23:13:17.614382 1612138 api_server.go:182] apiserver freezer: "7:freezer:/docker/c5a7c5f229d7c0b69dcae20e71998af176b9adb2ad91ef8294b6f9b4665f743e/crio/crio-da9684dbf8c8626834f87159a74fe6def5ea8042e67c5d2125d01b1087caa588"
	I0805 23:13:17.614449 1612138 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5a7c5f229d7c0b69dcae20e71998af176b9adb2ad91ef8294b6f9b4665f743e/crio/crio-da9684dbf8c8626834f87159a74fe6def5ea8042e67c5d2125d01b1087caa588/freezer.state
	I0805 23:13:17.623639 1612138 api_server.go:204] freezer state: "THAWED"
	I0805 23:13:17.623668 1612138 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0805 23:13:17.631661 1612138 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0805 23:13:17.631689 1612138 status.go:422] ha-148242 apiserver status = Running (err=<nil>)
	I0805 23:13:17.631699 1612138 status.go:257] ha-148242 status: &{Name:ha-148242 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:13:17.631726 1612138 status.go:255] checking status of ha-148242-m02 ...
	I0805 23:13:17.632056 1612138 cli_runner.go:164] Run: docker container inspect ha-148242-m02 --format={{.State.Status}}
	I0805 23:13:17.650050 1612138 status.go:330] ha-148242-m02 host status = "Stopped" (err=<nil>)
	I0805 23:13:17.650077 1612138 status.go:343] host is not running, skipping remaining checks
	I0805 23:13:17.650084 1612138 status.go:257] ha-148242-m02 status: &{Name:ha-148242-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:13:17.650122 1612138 status.go:255] checking status of ha-148242-m03 ...
	I0805 23:13:17.650435 1612138 cli_runner.go:164] Run: docker container inspect ha-148242-m03 --format={{.State.Status}}
	I0805 23:13:17.673657 1612138 status.go:330] ha-148242-m03 host status = "Running" (err=<nil>)
	I0805 23:13:17.673685 1612138 host.go:66] Checking if "ha-148242-m03" exists ...
	I0805 23:13:17.674101 1612138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148242-m03
	I0805 23:13:17.692144 1612138 host.go:66] Checking if "ha-148242-m03" exists ...
	I0805 23:13:17.692455 1612138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:13:17.692717 1612138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148242-m03
	I0805 23:13:17.709134 1612138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34662 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/ha-148242-m03/id_rsa Username:docker}
	I0805 23:13:17.809139 1612138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:13:17.827101 1612138 kubeconfig.go:125] found "ha-148242" server: "https://192.168.49.254:8443"
	I0805 23:13:17.827131 1612138 api_server.go:166] Checking apiserver status ...
	I0805 23:13:17.827203 1612138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:13:17.838723 1612138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1354/cgroup
	I0805 23:13:17.848742 1612138 api_server.go:182] apiserver freezer: "7:freezer:/docker/a9fabff595866cf336919cf69335871e1b8c63756bb3c436fde0564d107f0b7a/crio/crio-b8091cbf5375a5452f1e3213a2de0235de2c6374232caad5a424890e3b5c0cca"
	I0805 23:13:17.848832 1612138 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a9fabff595866cf336919cf69335871e1b8c63756bb3c436fde0564d107f0b7a/crio/crio-b8091cbf5375a5452f1e3213a2de0235de2c6374232caad5a424890e3b5c0cca/freezer.state
	I0805 23:13:17.858096 1612138 api_server.go:204] freezer state: "THAWED"
	I0805 23:13:17.858126 1612138 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0805 23:13:17.866242 1612138 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0805 23:13:17.866270 1612138 status.go:422] ha-148242-m03 apiserver status = Running (err=<nil>)
	I0805 23:13:17.866289 1612138 status.go:257] ha-148242-m03 status: &{Name:ha-148242-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:13:17.866307 1612138 status.go:255] checking status of ha-148242-m04 ...
	I0805 23:13:17.866600 1612138 cli_runner.go:164] Run: docker container inspect ha-148242-m04 --format={{.State.Status}}
	I0805 23:13:17.883500 1612138 status.go:330] ha-148242-m04 host status = "Running" (err=<nil>)
	I0805 23:13:17.883528 1612138 host.go:66] Checking if "ha-148242-m04" exists ...
	I0805 23:13:17.883813 1612138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-148242-m04
	I0805 23:13:17.901311 1612138 host.go:66] Checking if "ha-148242-m04" exists ...
	I0805 23:13:17.901615 1612138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:13:17.901661 1612138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-148242-m04
	I0805 23:13:17.924601 1612138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34667 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/ha-148242-m04/id_rsa Username:docker}
	I0805 23:13:18.026646 1612138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:13:18.053595 1612138 status.go:257] ha-148242-m04 status: &{Name:ha-148242-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
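
A sketch of the stop-and-verify step: with one control plane stopped, the API stays reachable through the shared endpoint (the healthz probes against 192.168.49.254:8443 above return 200), but status exits 7 because a node is down, which is what the test asserts:

	minikube -p ha-148242 node stop m02
	minikube -p ha-148242 status        # exit status 7 while m02 is stopped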

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (22.43s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 node start m02 -v=7 --alsologtostderr
E0805 23:13:29.306643 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.311922 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.322226 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.342588 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.382828 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.463086 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.623401 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:29.944491 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:30.584801 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:31.865344 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:13:34.425837 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 node start m02 -v=7 --alsologtostderr: (20.792331674s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
E0805 23:13:39.548386 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr: (1.47637942s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.43s)
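
And the recovery half: restarting the stopped control plane and confirming all nodes rejoin:

	minikube -p ha-148242 node start m02
	minikube -p ha-148242 status
	kubectl get nodes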

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.28s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (7.274975791s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (7.28s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (199.24s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-148242 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-148242 -v=7 --alsologtostderr
E0805 23:13:49.788963 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:14:10.269708 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-148242 -v=7 --alsologtostderr: (36.889411113s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-148242 --wait=true -v=7 --alsologtostderr
E0805 23:14:51.229958 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:16:13.150321 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-148242 --wait=true -v=7 --alsologtostderr: (2m42.199728144s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-148242
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (199.24s)

TestMultiControlPlane/serial/DeleteSecondaryNode (13.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 node delete m03 -v=7 --alsologtostderr: (12.149011869s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.12s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMultiControlPlane/serial/StopCluster (35.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 stop -v=7 --alsologtostderr
E0805 23:17:53.894819 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 stop -v=7 --alsologtostderr: (35.864208388s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr: exit status 7 (105.412412ms)
-- stdout --
	ha-148242
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148242-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-148242-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0805 23:17:57.173048 1626570 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:57.173186 1626570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:57.173197 1626570 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:57.173202 1626570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:57.173529 1626570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:17:57.173738 1626570 out.go:298] Setting JSON to false
	I0805 23:17:57.173765 1626570 mustload.go:65] Loading cluster: ha-148242
	I0805 23:17:57.174471 1626570 config.go:182] Loaded profile config "ha-148242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:57.174491 1626570 status.go:255] checking status of ha-148242 ...
	I0805 23:17:57.175334 1626570 cli_runner.go:164] Run: docker container inspect ha-148242 --format={{.State.Status}}
	I0805 23:17:57.175332 1626570 notify.go:220] Checking for updates...
	I0805 23:17:57.193071 1626570 status.go:330] ha-148242 host status = "Stopped" (err=<nil>)
	I0805 23:17:57.193099 1626570 status.go:343] host is not running, skipping remaining checks
	I0805 23:17:57.193107 1626570 status.go:257] ha-148242 status: &{Name:ha-148242 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:57.193135 1626570 status.go:255] checking status of ha-148242-m02 ...
	I0805 23:17:57.193470 1626570 cli_runner.go:164] Run: docker container inspect ha-148242-m02 --format={{.State.Status}}
	I0805 23:17:57.215498 1626570 status.go:330] ha-148242-m02 host status = "Stopped" (err=<nil>)
	I0805 23:17:57.215517 1626570 status.go:343] host is not running, skipping remaining checks
	I0805 23:17:57.215524 1626570 status.go:257] ha-148242-m02 status: &{Name:ha-148242-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:57.215544 1626570 status.go:255] checking status of ha-148242-m04 ...
	I0805 23:17:57.215838 1626570 cli_runner.go:164] Run: docker container inspect ha-148242-m04 --format={{.State.Status}}
	I0805 23:17:57.233693 1626570 status.go:330] ha-148242-m04 host status = "Stopped" (err=<nil>)
	I0805 23:17:57.233727 1626570 status.go:343] host is not running, skipping remaining checks
	I0805 23:17:57.233735 1626570 status.go:257] ha-148242-m04 status: &{Name:ha-148242-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.97s)

TestMultiControlPlane/serial/RestartCluster (97.33s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-148242 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0805 23:18:29.306642 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:18:56.990701 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:19:16.940977 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-148242 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.385442893s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (97.33s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (74.84s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-148242 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-148242 --control-plane -v=7 --alsologtostderr: (1m13.831024426s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-148242 status -v=7 --alsologtostderr: (1.011020461s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (60.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-102183 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-102183 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m0.20526163s)
--- PASS: TestJSONOutput/start/Command (60.21s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-102183 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.82s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-102183 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.82s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-102183 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-102183 --output=json --user=testUser: (6.155720696s)
--- PASS: TestJSONOutput/stop/Command (6.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-920788 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-920788 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.832148ms)
-- stdout --
	{"specversion":"1.0","id":"69507769-bdd0-4c55-b95d-be75d2d2eabd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-920788] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5af66037-5f9a-40f5-b7f7-379be59514f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"f4c19536-b2ec-4e37-9021-46e92ba82683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b79ebfc2-5f1a-449e-80d8-e7202185a151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig"}}
	{"specversion":"1.0","id":"f9f40360-c462-4eff-9f42-9b322e38df45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube"}}
	{"specversion":"1.0","id":"3f332c43-a348-4127-88b9-c768e1dc62aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a8c42408-fbbb-4858-a154-ad56d8f9d944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a4514114-0294-4c0e-b12f-02511292d80b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-920788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-920788
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (41.02s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-973076 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-973076 --network=: (38.920356532s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-973076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-973076
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-973076: (2.065686148s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.02s)

TestKicCustomNetwork/use_default_bridge_network (32.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-974254 --network=bridge
E0805 23:22:53.894813 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-974254 --network=bridge: (30.948125751s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-974254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-974254
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-974254: (1.951970442s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.92s)

TestKicExistingNetwork (40.6s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-390615 --network=existing-network
E0805 23:23:29.306687 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-390615 --network=existing-network: (38.387148595s)
helpers_test.go:175: Cleaning up "existing-network-390615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-390615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-390615: (2.055183279s)
--- PASS: TestKicExistingNetwork (40.60s)

TestKicCustomSubnet (34.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-811136 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-811136 --subnet=192.168.60.0/24: (32.24310601s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-811136 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-811136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-811136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-811136: (2.12355441s)
--- PASS: TestKicCustomSubnet (34.39s)

TestKicStaticIP (33.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-555242 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-555242 --static-ip=192.168.200.200: (31.286698213s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-555242 ip
helpers_test.go:175: Cleaning up "static-ip-555242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-555242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-555242: (2.060837112s)
--- PASS: TestKicStaticIP (33.48s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-196390 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-196390 --driver=docker  --container-runtime=crio: (32.434197766s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-199056 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-199056 --driver=docker  --container-runtime=crio: (34.308656126s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-196390
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-199056
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-199056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-199056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-199056: (1.944215423s)
helpers_test.go:175: Cleaning up "first-196390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-196390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-196390: (2.305520699s)
--- PASS: TestMinikubeProfile (72.20s)

TestMountStart/serial/StartWithMountFirst (9.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-509576 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-509576 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.535497374s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.54s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-509576 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-522868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-522868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.830000631s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.83s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-509576 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-509576 --alsologtostderr -v=5: (1.60544847s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-522868
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-522868: (1.187937192s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-522868
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-522868: (7.080006742s)
--- PASS: TestMountStart/serial/RestartStopped (8.08s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-522868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (92.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-844686 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0805 23:27:53.894547 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-844686 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.366052266s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.87s)

TestMultiNode/serial/DeployApp2Nodes (5.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- rollout status deployment/busybox
E0805 23:28:29.306592 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-844686 -- rollout status deployment/busybox: (3.14850555s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-rr929 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-vbg5m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-rr929 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-vbg5m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-rr929 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-vbg5m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.14s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-rr929 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-rr929 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-vbg5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-844686 -- exec busybox-fc5497c4f-vbg5m -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (31.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-844686 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-844686 -v 3 --alsologtostderr: (30.372097293s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.05s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-844686 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (9.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp testdata/cp-test.txt multinode-844686:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3270830723/001/cp-test_multinode-844686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686:/home/docker/cp-test.txt multinode-844686-m02:/home/docker/cp-test_multinode-844686_multinode-844686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test_multinode-844686_multinode-844686-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686:/home/docker/cp-test.txt multinode-844686-m03:/home/docker/cp-test_multinode-844686_multinode-844686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test_multinode-844686_multinode-844686-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp testdata/cp-test.txt multinode-844686-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3270830723/001/cp-test_multinode-844686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m02:/home/docker/cp-test.txt multinode-844686:/home/docker/cp-test_multinode-844686-m02_multinode-844686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test_multinode-844686-m02_multinode-844686.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m02:/home/docker/cp-test.txt multinode-844686-m03:/home/docker/cp-test_multinode-844686-m02_multinode-844686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test_multinode-844686-m02_multinode-844686-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp testdata/cp-test.txt multinode-844686-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3270830723/001/cp-test_multinode-844686-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m03:/home/docker/cp-test.txt multinode-844686:/home/docker/cp-test_multinode-844686-m03_multinode-844686.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686 "sudo cat /home/docker/cp-test_multinode-844686-m03_multinode-844686.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 cp multinode-844686-m03:/home/docker/cp-test.txt multinode-844686-m02:/home/docker/cp-test_multinode-844686-m03_multinode-844686-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 ssh -n multinode-844686-m02 "sudo cat /home/docker/cp-test_multinode-844686-m03_multinode-844686-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.97s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-844686 node stop m03: (1.2003222s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-844686 status: exit status 7 (537.6859ms)
-- stdout --
	multinode-844686
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844686-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844686-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr: exit status 7 (535.065821ms)
-- stdout --
	multinode-844686
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-844686-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-844686-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0805 23:29:18.119596 1680606 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:29:18.119843 1680606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:29:18.119860 1680606 out.go:304] Setting ErrFile to fd 2...
	I0805 23:29:18.119867 1680606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:29:18.120151 1680606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:29:18.120383 1680606 out.go:298] Setting JSON to false
	I0805 23:29:18.120436 1680606 mustload.go:65] Loading cluster: multinode-844686
	I0805 23:29:18.120606 1680606 notify.go:220] Checking for updates...
	I0805 23:29:18.120911 1680606 config.go:182] Loaded profile config "multinode-844686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:29:18.120928 1680606 status.go:255] checking status of multinode-844686 ...
	I0805 23:29:18.121498 1680606 cli_runner.go:164] Run: docker container inspect multinode-844686 --format={{.State.Status}}
	I0805 23:29:18.146156 1680606 status.go:330] multinode-844686 host status = "Running" (err=<nil>)
	I0805 23:29:18.146184 1680606 host.go:66] Checking if "multinode-844686" exists ...
	I0805 23:29:18.146484 1680606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844686
	I0805 23:29:18.166532 1680606 host.go:66] Checking if "multinode-844686" exists ...
	I0805 23:29:18.166958 1680606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:29:18.167015 1680606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844686
	I0805 23:29:18.198979 1680606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34772 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/multinode-844686/id_rsa Username:docker}
	I0805 23:29:18.294078 1680606 ssh_runner.go:195] Run: systemctl --version
	I0805 23:29:18.298418 1680606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:29:18.310319 1680606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:29:18.369009 1680606 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-05 23:29:18.357619995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:29:18.369740 1680606 kubeconfig.go:125] found "multinode-844686" server: "https://192.168.58.2:8443"
	I0805 23:29:18.369782 1680606 api_server.go:166] Checking apiserver status ...
	I0805 23:29:18.369833 1680606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:29:18.383863 1680606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	I0805 23:29:18.394938 1680606 api_server.go:182] apiserver freezer: "7:freezer:/docker/e70ae3f6444072438d2ce9e45e484d173ccac130b4fcaab1b4633b941b65b135/crio/crio-510f8b6ee55bc0301cde8c6c4843f11f0c38bc4f24244d27db67e77600512380"
	I0805 23:29:18.395061 1680606 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e70ae3f6444072438d2ce9e45e484d173ccac130b4fcaab1b4633b941b65b135/crio/crio-510f8b6ee55bc0301cde8c6c4843f11f0c38bc4f24244d27db67e77600512380/freezer.state
	I0805 23:29:18.404883 1680606 api_server.go:204] freezer state: "THAWED"
	I0805 23:29:18.404913 1680606 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0805 23:29:18.414110 1680606 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0805 23:29:18.414149 1680606 status.go:422] multinode-844686 apiserver status = Running (err=<nil>)
	I0805 23:29:18.414161 1680606 status.go:257] multinode-844686 status: &{Name:multinode-844686 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:29:18.414179 1680606 status.go:255] checking status of multinode-844686-m02 ...
	I0805 23:29:18.414500 1680606 cli_runner.go:164] Run: docker container inspect multinode-844686-m02 --format={{.State.Status}}
	I0805 23:29:18.432104 1680606 status.go:330] multinode-844686-m02 host status = "Running" (err=<nil>)
	I0805 23:29:18.432127 1680606 host.go:66] Checking if "multinode-844686-m02" exists ...
	I0805 23:29:18.432646 1680606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-844686-m02
	I0805 23:29:18.451906 1680606 host.go:66] Checking if "multinode-844686-m02" exists ...
	I0805 23:29:18.452206 1680606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:29:18.452248 1680606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-844686-m02
	I0805 23:29:18.469624 1680606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34777 SSHKeyPath:/home/jenkins/minikube-integration/19373-1559727/.minikube/machines/multinode-844686-m02/id_rsa Username:docker}
	I0805 23:29:18.565613 1680606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:29:18.577467 1680606 status.go:257] multinode-844686-m02 status: &{Name:multinode-844686-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:29:18.577502 1680606 status.go:255] checking status of multinode-844686-m03 ...
	I0805 23:29:18.577823 1680606 cli_runner.go:164] Run: docker container inspect multinode-844686-m03 --format={{.State.Status}}
	I0805 23:29:18.596065 1680606 status.go:330] multinode-844686-m03 host status = "Stopped" (err=<nil>)
	I0805 23:29:18.596091 1680606 status.go:343] host is not running, skipping remaining checks
	I0805 23:29:18.596098 1680606 status.go:257] multinode-844686-m03 status: &{Name:multinode-844686-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (9.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-844686 node start m03 -v=7 --alsologtostderr: (9.174216803s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.93s)

TestMultiNode/serial/RestartKeepsNodes (82.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-844686
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-844686
E0805 23:29:52.350986 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-844686: (24.823578812s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-844686 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-844686 --wait=true -v=8 --alsologtostderr: (57.337574127s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-844686
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.29s)
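What this test asserts is that a full stop/start cycle preserves the node set. A sketch of the same cycle, with names from this run:

    # stop the whole cluster, restart it, and compare the node lists
    minikube node list -p multinode-844686
    minikube stop -p multinode-844686
    minikube start -p multinode-844686 --wait=true
    minikube node list -p multinode-844686   # should match the pre-stop list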

TestMultiNode/serial/DeleteNode (5.47s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-844686 node delete m03: (4.760975956s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.47s)
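The go-template above prints one Ready-condition status per remaining node, which is how the test verifies the cluster after the delete. Unwrapped for manual use (a sketch, not part of the test):

    minikube -p multinode-844686 node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'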

TestMultiNode/serial/StopMultiNode (23.88s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-844686 stop: (23.688983351s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-844686 status: exit status 7 (105.172572ms)

-- stdout --
	multinode-844686
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844686-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr: exit status 7 (85.048915ms)

-- stdout --
	multinode-844686
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-844686-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0805 23:31:20.137318 1688103 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:31:20.137524 1688103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:31:20.137537 1688103 out.go:304] Setting ErrFile to fd 2...
	I0805 23:31:20.137542 1688103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:31:20.137810 1688103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:31:20.138033 1688103 out.go:298] Setting JSON to false
	I0805 23:31:20.138089 1688103 mustload.go:65] Loading cluster: multinode-844686
	I0805 23:31:20.138191 1688103 notify.go:220] Checking for updates...
	I0805 23:31:20.138546 1688103 config.go:182] Loaded profile config "multinode-844686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:31:20.138566 1688103 status.go:255] checking status of multinode-844686 ...
	I0805 23:31:20.139129 1688103 cli_runner.go:164] Run: docker container inspect multinode-844686 --format={{.State.Status}}
	I0805 23:31:20.158479 1688103 status.go:330] multinode-844686 host status = "Stopped" (err=<nil>)
	I0805 23:31:20.158504 1688103 status.go:343] host is not running, skipping remaining checks
	I0805 23:31:20.158513 1688103 status.go:257] multinode-844686 status: &{Name:multinode-844686 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:31:20.158571 1688103 status.go:255] checking status of multinode-844686-m02 ...
	I0805 23:31:20.158885 1688103 cli_runner.go:164] Run: docker container inspect multinode-844686-m02 --format={{.State.Status}}
	I0805 23:31:20.174907 1688103 status.go:330] multinode-844686-m02 host status = "Stopped" (err=<nil>)
	I0805 23:31:20.174932 1688103 status.go:343] host is not running, skipping remaining checks
	I0805 23:31:20.174939 1688103 status.go:257] multinode-844686-m02 status: &{Name:multinode-844686-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)
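Note that exit status 7 is the expected result of "minikube status" against a stopped cluster, not a failure. A minimal sketch of checking for it in a script:

    # exit code 7 from "minikube status" means the host is stopped
    minikube -p multinode-844686 status
    rc=$?
    if [ "$rc" -eq 7 ]; then
        echo "cluster is stopped, as expected"
    fi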

TestMultiNode/serial/RestartMultiNode (58.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-844686 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-844686 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (57.491257928s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-844686 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.21s)

TestMultiNode/serial/ValidateNameConflict (35.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-844686
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-844686-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-844686-m02 --driver=docker  --container-runtime=crio: exit status 14 (82.789027ms)

-- stdout --
	* [multinode-844686-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-844686-m02' is duplicated with machine name 'multinode-844686-m02' in profile 'multinode-844686'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-844686-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-844686-m03 --driver=docker  --container-runtime=crio: (33.371285248s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-844686
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-844686: exit status 80 (326.831055ms)

-- stdout --
	* Adding node m03 to cluster multinode-844686 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-844686-m03 already exists in multinode-844686-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-844686-m03
E0805 23:32:53.894943 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-844686-m03: (1.96145815s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.80s)
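Profile names must be unique, and each extra node already claims the <profile>-m02/-m03 machine names, hence the exit-14 MK_USAGE error above. A sketch of checking before picking a name:

    # list existing profiles first; reusing a machine name fails with exit 14
    minikube profile list
    minikube start -p multinode-844686-m02 --driver=docker --container-runtime=crio   # MK_USAGE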

TestPreload (126.6s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-773065 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0805 23:33:29.306961 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-773065 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.519092705s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-773065 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-773065 image pull gcr.io/k8s-minikube/busybox: (1.79220768s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-773065
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-773065: (5.807726309s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-773065 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-773065 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.606892427s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-773065 image list
helpers_test.go:175: Cleaning up "test-preload-773065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-773065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-773065: (2.582086158s)
--- PASS: TestPreload (126.60s)
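The preload test boots without the preloaded image tarball, adds an image, and checks that it survives a restart. The same flow by hand, a sketch with names and versions from this run:

    minikube start -p test-preload-773065 --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-773065 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-773065
    minikube start -p test-preload-773065 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p test-preload-773065 image list   # busybox should still appear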

TestScheduledStopUnix (105.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-189744 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-189744 --memory=2048 --driver=docker  --container-runtime=crio: (29.802734701s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-189744 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-189744 -n scheduled-stop-189744
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-189744 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-189744 --cancel-scheduled
E0805 23:35:56.943502 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-189744 -n scheduled-stop-189744
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-189744
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-189744 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-189744
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-189744: exit status 7 (69.656592ms)

-- stdout --
	scheduled-stop-189744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-189744 -n scheduled-stop-189744
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-189744 -n scheduled-stop-189744: exit status 7 (75.619638ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-189744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-189744
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-189744: (4.432558549s)
--- PASS: TestScheduledStopUnix (105.93s)
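The scheduled-stop flow above, condensed into a hand-runnable sketch (the sleep is my addition; the test polls status instead):

    minikube stop -p scheduled-stop-189744 --schedule 5m
    minikube status --format={{.TimeToStop}} -p scheduled-stop-189744
    minikube stop -p scheduled-stop-189744 --cancel-scheduled
    minikube stop -p scheduled-stop-189744 --schedule 15s
    sleep 20
    minikube status -p scheduled-stop-189744   # exit 7 once the host has stopped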

TestInsufficientStorage (11.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-886761 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-886761 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.826415539s)

-- stdout --
	{"specversion":"1.0","id":"cb43b4d6-c39f-4fb3-b822-c2e68e175d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-886761] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e64c739-daed-4532-ad95-9ac87164c149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"5f8423bc-55de-43a6-8984-e15c48d9d7c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87b46897-82f7-4852-a2ad-93e473b9ddd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig"}}
	{"specversion":"1.0","id":"0c64c8bd-6039-49ac-b988-5092a2784907","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube"}}
	{"specversion":"1.0","id":"ecc33054-10ac-4997-b0cd-c83ccd9bbd65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8abe4e7d-43af-4102-988b-0274a599258c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50f5da0a-6d28-4caa-a210-084c47d6bfb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ed9493b6-6a3b-4d55-a295-078e0b8790f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"33305a66-b04c-4faf-8ff3-6adea506528d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c80361fb-0428-4a24-afc4-8f41f529ed48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5cf851f3-a301-45e3-bd21-67f5af0d31e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-886761\" primary control-plane node in \"insufficient-storage-886761\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3aa9e0f-e41b-491e-b82d-84a06967b5f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce2152af-77a7-4c1a-86a3-2188e0ac1b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbbb6b00-9dee-4913-8314-3d91eb2a513e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-886761 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-886761 --output=json --layout=cluster: exit status 7 (293.160743ms)

-- stdout --
	{"Name":"insufficient-storage-886761","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-886761","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0805 23:36:59.854473 1705762 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-886761" does not appear in /home/jenkins/minikube-integration/19373-1559727/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-886761 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-886761 --output=json --layout=cluster: exit status 7 (560.852912ms)

-- stdout --
	{"Name":"insufficient-storage-886761","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-886761","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0805 23:37:00.411669 1705826 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-886761" does not appear in /home/jenkins/minikube-integration/19373-1559727/kubeconfig
	E0805 23:37:00.427422 1705826 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/insufficient-storage-886761/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-886761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-886761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-886761: (1.947015179s)
--- PASS: TestInsufficientStorage (11.63s)
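The JSON payloads above carry the machine-readable verdict (StatusCode 507, StatusName "InsufficientStorage"). A sketch of extracting it; jq is my addition, not something the test uses:

    minikube status -p insufficient-storage-886761 --output=json --layout=cluster | jq -r '.StatusName'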

TestRunningBinaryUpgrade (90.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1966527522 start -p running-upgrade-615956 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1966527522 start -p running-upgrade-615956 --memory=2200 --vm-driver=docker  --container-runtime=crio: (41.656532605s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-615956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-615956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.840894845s)
helpers_test.go:175: Cleaning up "running-upgrade-615956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-615956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-615956: (2.680055732s)
--- PASS: TestRunningBinaryUpgrade (90.94s)

TestKubernetesUpgrade (139.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.778608156s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-993017
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-993017: (1.263363821s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-993017 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-993017 status --format={{.Host}}: exit status 7 (93.118229ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.648630791s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-993017 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (149.603777ms)

-- stdout --
	* [kubernetes-upgrade-993017] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-993017
	    minikube start -p kubernetes-upgrade-993017 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9930172 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-993017 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-993017 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.233815505s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-993017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-993017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-993017: (2.214361277s)
--- PASS: TestKubernetesUpgrade (139.70s)
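The upgrade path the test walks is: start old, stop, start new; a direct downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). As a sketch, with this run's versions:

    minikube start -p kubernetes-upgrade-993017 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-993017
    minikube start -p kubernetes-upgrade-993017 --kubernetes-version=v1.31.0-rc.0 --driver=docker --container-runtime=crio
    minikube start -p kubernetes-upgrade-993017 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exit 106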

TestMissingContainerUpgrade (146.85s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2516568843 start -p missing-upgrade-755025 --memory=2200 --driver=docker  --container-runtime=crio
E0805 23:38:29.306901 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2516568843 start -p missing-upgrade-755025 --memory=2200 --driver=docker  --container-runtime=crio: (1m8.183178024s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-755025
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-755025: (10.420733368s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-755025
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-755025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-755025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.251311888s)
helpers_test.go:175: Cleaning up "missing-upgrade-755025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-755025
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-755025: (2.219430071s)
--- PASS: TestMissingContainerUpgrade (146.85s)

TestPause/serial/Start (68.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-741737 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-741737 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m8.373998351s)
--- PASS: TestPause/serial/Start (68.37s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (101.668899ms)

-- stdout --
	* [NoKubernetes-117210] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
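As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive. If the version is pinned in the global config, unset it first (a sketch):

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-117210 --no-kubernetes --driver=docker --container-runtime=crio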

TestNoKubernetes/serial/StartWithK8s (42.49s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117210 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117210 --driver=docker  --container-runtime=crio: (42.122386424s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-117210 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.49s)

TestNoKubernetes/serial/StartWithStopK8s (7.22s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --driver=docker  --container-runtime=crio: (4.866941372s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-117210 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-117210 status -o json: exit status 2 (322.047731ms)

-- stdout --
	{"Name":"NoKubernetes-117210","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-117210
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-117210: (2.025726563s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.22s)

TestNoKubernetes/serial/Start (9.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --driver=docker  --container-runtime=crio
E0805 23:37:53.895066 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117210 --no-kubernetes --driver=docker  --container-runtime=crio: (9.266202208s)
--- PASS: TestNoKubernetes/serial/Start (9.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-117210 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-117210 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.624279ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
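Here the non-zero exit is the success condition: systemctl is-active fails for an inactive kubelet. A sketch of the same check:

    minikube ssh -p NoKubernetes-117210 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running, as expected"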

TestNoKubernetes/serial/ProfileList (1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-117210
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-117210: (1.214498469s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (7.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117210 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117210 --driver=docker  --container-runtime=crio: (7.294359524s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.29s)

TestPause/serial/SecondStartNoReconfiguration (39.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-741737 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-741737 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.01836406s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-117210 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-117210 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.692866ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestPause/serial/Pause (1.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-741737 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-741737 --alsologtostderr -v=5: (1.077872426s)
--- PASS: TestPause/serial/Pause (1.08s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-741737 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-741737 --output=json --layout=cluster: exit status 2 (379.131577ms)

-- stdout --
	{"Name":"pause-741737","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-741737","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)

TestPause/serial/Unpause (1.2s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-741737 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-741737 --alsologtostderr -v=5: (1.199009435s)
--- PASS: TestPause/serial/Unpause (1.20s)

TestPause/serial/PauseAgain (1.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-741737 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-741737 --alsologtostderr -v=5: (1.692323955s)
--- PASS: TestPause/serial/PauseAgain (1.69s)

TestPause/serial/DeletePaused (2.9s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-741737 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-741737 --alsologtostderr -v=5: (2.901819098s)
--- PASS: TestPause/serial/DeletePaused (2.90s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-741737
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-741737: exit status 1 (22.817824ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-741737: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
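The pause group above exercises the full lifecycle; a paused cluster reports StatusCode 418 ("Paused") and status exits 2. Condensed into a sketch with this run's profile:

    minikube pause -p pause-741737
    minikube status -p pause-741737 --output=json --layout=cluster   # exit 2, StatusName "Paused"
    minikube unpause -p pause-741737
    minikube delete -p pause-741737
    docker volume inspect pause-741737   # now fails: no such volume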

TestStoppedBinaryUpgrade/Setup (0.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

TestStoppedBinaryUpgrade/Upgrade (105.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1899663024 start -p stopped-upgrade-353060 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1899663024 start -p stopped-upgrade-353060 --memory=2200 --vm-driver=docker  --container-runtime=crio: (51.704909791s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1899663024 -p stopped-upgrade-353060 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1899663024 -p stopped-upgrade-353060 stop: (3.064302803s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-353060 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-353060 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.925920673s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.70s)
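The stopped-binary upgrade: create and stop a cluster with an old release, then start it with the binary under test. A sketch using this run's temporary v1.26.0 binary path:

    /tmp/minikube-v1.26.0.1899663024 start -p stopped-upgrade-353060 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.1899663024 -p stopped-upgrade-353060 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-353060 --memory=2200 --driver=docker --container-runtime=crio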

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-353060
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-353060: (1.251971032s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestNetworkPlugins/group/false (4.46s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-480008 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-480008 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (264.61884ms)

-- stdout --
	* [false-480008] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0805 23:42:52.824187 1741382 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:42:52.824463 1741382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:42:52.824500 1741382 out.go:304] Setting ErrFile to fd 2...
	I0805 23:42:52.824529 1741382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:42:52.824855 1741382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-1559727/.minikube/bin
	I0805 23:42:52.825411 1741382 out.go:298] Setting JSON to false
	I0805 23:42:52.826521 1741382 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30313,"bootTime":1722871060,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0805 23:42:52.826672 1741382 start.go:139] virtualization:  
	I0805 23:42:52.829543 1741382 out.go:177] * [false-480008] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0805 23:42:52.831540 1741382 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:42:52.831613 1741382 notify.go:220] Checking for updates...
	I0805 23:42:52.834856 1741382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:42:52.837513 1741382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-1559727/kubeconfig
	I0805 23:42:52.839496 1741382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-1559727/.minikube
	I0805 23:42:52.843072 1741382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0805 23:42:52.844954 1741382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:42:52.847427 1741382 config.go:182] Loaded profile config "force-systemd-flag-958329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:42:52.847531 1741382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:42:52.906316 1741382 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0805 23:42:52.906456 1741382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0805 23:42:52.998419 1741382 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-05 23:42:52.985062206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0805 23:42:52.998541 1741382 docker.go:307] overlay module found
	I0805 23:42:53.000652 1741382 out.go:177] * Using the docker driver based on user configuration
	I0805 23:42:53.002425 1741382 start.go:297] selected driver: docker
	I0805 23:42:53.002441 1741382 start.go:901] validating driver "docker" against <nil>
	I0805 23:42:53.002462 1741382 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:42:53.005900 1741382 out.go:177] 
	W0805 23:42:53.008341 1741382 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0805 23:42:53.011299 1741382 out.go:177] 

** /stderr **
E0805 23:42:53.896504 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-480008 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-480008

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-480008" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-480008

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480008"

                                                
                                                
----------------------- debugLogs end: false-480008 [took: 3.982232514s] --------------------------------
helpers_test.go:175: Cleaning up "false-480008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-480008
--- PASS: TestNetworkPlugins/group/false (4.46s)
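
Note: every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the false-480008 profile was never created: the start was rejected (crio requires CNI) before any cluster or kubeconfig context existed, which is why the group still records [pass: true]. A by-hand confirmation of that state would be (sketch):

	out/minikube-linux-arm64 profile list
	kubectl config get-contexts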

TestStartStop/group/old-k8s-version/serial/FirstStart (183.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-287333 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0805 23:46:32.351757 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-287333 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m3.383238959s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.38s)

TestStartStop/group/no-preload/serial/FirstStart (74.7s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-263593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-263593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (1m14.697677921s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-287333 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44effab8-4b89-42fc-84a5-1712ab14135c] Pending
helpers_test.go:344: "busybox" [44effab8-4b89-42fc-84a5-1712ab14135c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [44effab8-4b89-42fc-84a5-1712ab14135c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003507969s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-287333 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)
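
The DeployApp flow above is: create the busybox pod from testdata/busybox.yaml, wait for pods labeled integration-test=busybox to become Ready, then exec a probe command inside the container. Reproduced by hand it looks roughly like this (a sketch; the harness polls through its own helpers rather than kubectl wait):

	kubectl --context old-k8s-version-287333 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-287333 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-287333 exec busybox -- /bin/sh -c "ulimit -n"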

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-287333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-287333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.503680787s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-287333 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)
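
The enable command above exercises minikube's addon image overrides: --images remaps the MetricsServer image to registry.k8s.io/echoserver:1.4 and --registries points it at fake.domain, and the follow-up describe appears to be how the test confirms the deployment spec picked up the overrides. A by-hand spot check might be (the grep filter is illustrative):

	kubectl --context old-k8s-version-287333 describe deploy/metrics-server -n kube-system | grep -i image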

TestStartStop/group/old-k8s-version/serial/Stop (12.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-287333 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-287333 --alsologtostderr -v=3: (12.567925346s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-287333 -n old-k8s-version-287333
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-287333 -n old-k8s-version-287333: exit status 7 (114.956664ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-287333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
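
Note: status exits 7 here because the node is stopped; the harness explicitly tolerates that ("may be ok") and then enables the dashboard addon against the stopped profile. By hand (sketch):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-287333 -n old-k8s-version-287333; echo "exit=$?"   # Stopped, exit=7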

TestStartStop/group/old-k8s-version/serial/SecondStart (134.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-287333 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0805 23:47:53.895045 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:48:29.306724 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-287333 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m14.211876583s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-287333 -n old-k8s-version-287333
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (134.63s)

TestStartStop/group/no-preload/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-263593 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [24415afc-1cce-4dae-b2f0-9681a444ecd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [24415afc-1cce-4dae-b2f0-9681a444ecd7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003142856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-263593 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-263593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-263593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.323322175s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-263593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/no-preload/serial/Stop (12.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-263593 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-263593 --alsologtostderr -v=3: (12.42129947s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-263593 -n no-preload-263593
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-263593 -n no-preload-263593: exit status 7 (85.011003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-263593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (268.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-263593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-263593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (4m27.797588449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-263593 -n no-preload-263593
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qfjf7" [df482849-fd3d-412f-96a6-b28b2d969a96] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004278993s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qfjf7" [df482849-fd3d-412f-96a6-b28b2d969a96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004643629s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-287333 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-287333 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
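
VerifyKubernetesImages lists the images present in the profile and flags any that are not stock minikube/Kubernetes images (here the busybox and kindnetd test images). To eyeball the same list by hand (the jq filter and the repoTags field name are assumptions about the JSON shape, not taken from this report):

	out/minikube-linux-arm64 -p old-k8s-version-287333 image list --format=json | jq -r '.[].repoTags[]'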

TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-287333 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-287333 -n old-k8s-version-287333
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-287333 -n old-k8s-version-287333: exit status 2 (329.322706ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-287333 -n old-k8s-version-287333
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-287333 -n old-k8s-version-287333: exit status 2 (338.506523ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-287333 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-287333 -n old-k8s-version-287333
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-287333 -n old-k8s-version-287333
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
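
The Pause sequence above is: pause the profile, confirm via status that the apiserver reports Paused and the kubelet reports Stopped (each status call exits 2, which the harness accepts), then unpause and re-check. Condensed (sketch):

	out/minikube-linux-arm64 pause -p old-k8s-version-287333 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-287333 -n old-k8s-version-287333   # Paused, exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-287333 -n old-k8s-version-287333     # Stopped, exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-287333 --alsologtostderr -v=1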

TestStartStop/group/embed-certs/serial/FirstStart (63.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-821989 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-821989 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m3.610803804s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.61s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-821989 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de33f9ef-24cc-44e2-8c9c-06f82f6720d2] Pending
helpers_test.go:344: "busybox" [de33f9ef-24cc-44e2-8c9c-06f82f6720d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [de33f9ef-24cc-44e2-8c9c-06f82f6720d2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004374601s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-821989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-821989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-821989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-821989 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-821989 --alsologtostderr -v=3: (11.961073227s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-821989 -n embed-certs-821989
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-821989 -n embed-certs-821989: exit status 7 (67.483229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-821989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (266.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-821989 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0805 23:52:28.720377 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:28.725635 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:28.735892 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:28.756140 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:28.796396 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:28.876628 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:29.037199 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:29.358321 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:29.999311 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:31.279930 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:33.840639 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:36.943771 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:52:38.961712 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:49.202577 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:52:53.894755 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:53:09.682808 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-821989 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m26.258504213s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-821989 -n embed-certs-821989
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-np6f4" [c8c8a321-6686-4c01-8382-0856ebff2b5a] Running
E0805 23:53:29.306980 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003565299s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-np6f4" [c8c8a321-6686-4c01-8382-0856ebff2b5a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003795231s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-263593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.85s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-263593 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.85s)

TestStartStop/group/no-preload/serial/Pause (3.06s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-263593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-263593 -n no-preload-263593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-263593 -n no-preload-263593: exit status 2 (340.01691ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-263593 -n no-preload-263593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-263593 -n no-preload-263593: exit status 2 (313.304221ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-263593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-263593 -n no-preload-263593
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-263593 -n no-preload-263593
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-269224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0805 23:53:50.643504 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-269224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m4.929043418s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.93s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-269224 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [834e2fa4-0bf4-4892-b11f-6ef402204327] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [834e2fa4-0bf4-4892-b11f-6ef402204327] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005067262s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-269224 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-269224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-269224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022979674s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-269224 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-269224 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-269224 --alsologtostderr -v=3: (11.970575734s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224: exit status 7 (71.882176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-269224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-269224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0805 23:55:12.563726 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-269224 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m27.644480082s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.10s)
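
SecondStart repeats the original flag set against the existing profile; --wait=true is what makes the command block until the core components are healthy again, so the 4m27s wall time is the cluster restart itself, not the CLI. Condensed, with the follow-up status check the test also runs:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-269224 --memory=2200 --wait=true \
      --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.30.3
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-269224   # "Running" once up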

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-jz2vs" [cc297502-ead0-4389-907e-09c692261a00] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003270336s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-jz2vs" [cc297502-ead0-4389-907e-09c692261a00] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003736211s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-821989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-821989 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
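
VerifyKubernetesImages lists everything in the node's image store and reports anything outside the expected minikube/Kubernetes set; the kindnet and busybox entries above are leftovers from the CNI and earlier workloads, not failures. A rough jq equivalent of the audit (the repoTags field name is an assumption about the JSON shape):

    out/minikube-linux-arm64 -p embed-certs-821989 image list --format=json \
      | jq -r '.[].repoTags[]' \
      | grep -vE 'registry.k8s.io|gcr.io/k8s-minikube'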

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-821989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-821989 -n embed-certs-821989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-821989 -n embed-certs-821989: exit status 2 (316.459111ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-821989 -n embed-certs-821989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-821989 -n embed-certs-821989: exit status 2 (352.078783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-821989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-821989 -n embed-certs-821989
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-821989 -n embed-certs-821989
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)
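
Pause asserts both halves of the round trip: after pause the apiserver reports Paused and the kubelet Stopped (each status query exits 2, which the harness tolerates), and after unpause the same queries succeed again. The sequence, condensed for the profile above:

    out/minikube-linux-arm64 pause -p embed-certs-821989
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-821989   # Paused (exit 2)
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-821989     # Stopped (exit 2)
    out/minikube-linux-arm64 unpause -p embed-certs-821989
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-821989   # exits 0 again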

TestStartStop/group/newest-cni/serial/FirstStart (41.5s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-857175 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-857175 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (41.503179495s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.50s)
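
FirstStart for the newest-cni group exercises three less-common knobs at once: a feature gate passed through to the components, --network-plugin=cni, and a kubeadm extra-config pinning the pod CIDR that the eventual CNI must serve. Isolated for reference:

    out/minikube-linux-arm64 start -p newest-cni-857175 --memory=2200 \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.0-rc.0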

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-857175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-857175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.360358631s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-857175 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-857175 --alsologtostderr -v=3: (1.264246655s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857175 -n newest-cni-857175
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857175 -n newest-cni-857175: exit status 7 (69.101808ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-857175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (18.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-857175 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0805 23:57:28.720653 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-857175 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (18.363058929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-857175 -n newest-cni-857175
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.71s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-857175 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.86s)

TestStartStop/group/newest-cni/serial/Pause (3.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-857175 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857175 -n newest-cni-857175
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857175 -n newest-cni-857175: exit status 2 (304.695962ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857175 -n newest-cni-857175
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857175 -n newest-cni-857175: exit status 2 (323.366966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-857175 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-857175 -n newest-cni-857175
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-857175 -n newest-cni-857175
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.14s)

TestNetworkPlugins/group/auto/Start (62.68s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0805 23:57:53.894714 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/addons-554168/client.crt: no such file or directory
E0805 23:57:56.404135 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
E0805 23:58:29.306974 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0805 23:58:33.689528 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:33.694777 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:33.705077 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:33.725443 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:33.765726 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:33.846219 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:34.007854 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:34.328602 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:34.968971 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:36.249796 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:38.810709 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
E0805 23:58:43.931738 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m2.681977864s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.68s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7fm47" [706df841-73bd-41a8-bcd3-f56f5f692862] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0805 23:58:54.172840 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7fm47" [706df841-73bd-41a8-bcd3-f56f5f692862] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004944487s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)
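
Every NetCatPod check in this suite follows the same pattern: force-replace the netcat Deployment from testdata, then poll until a pod labelled app=netcat is Running. kubectl wait is a rough stand-in for the helper's polling loop:

    kubectl --context auto-480008 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-480008 wait --for=condition=ready pod -l app=netcat --timeout=15m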

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
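
DNS, Localhost and HairPin are the same three probes for every plugin: resolve the cluster's Service domain, connect to the pod's own loopback, and connect back to the pod through its own Service (hairpin traffic). In nc terms, -z opens the connection without sending data, -w 5 caps each attempt at five seconds, and -i 5 spaces the attempts:

    kubectl --context auto-480008 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-480008 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080
    kubectl --context auto-480008 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080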

TestNetworkPlugins/group/kindnet/Start (60.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m0.7066286s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.71s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b2sp4" [ec0cfa2e-e535-4876-9aea-e084db6d0830] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003883764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b2sp4" [ec0cfa2e-e535-4876-9aea-e084db6d0830] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009810016s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-269224 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-269224 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-269224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224: exit status 2 (326.935099ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224: exit status 2 (379.823876ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-269224 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-269224 -n default-k8s-diff-port-269224
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.45s)
E0806 00:04:49.437468 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.442754 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.453081 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.473363 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.513629 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.594034 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:49.754435 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:50.075489 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:50.716065 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:51.996302 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:54.556790 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:04:59.677075 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:05:09.917856 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:05:10.425558 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:05:22.316637 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.321914 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.332264 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.352627 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.392904 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.473360 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.633692 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:22.954322 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:23.595224 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:24.875697 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:27.436446 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
E0806 00:05:30.398988 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/default-k8s-diff-port-269224/client.crt: no such file or directory
E0806 00:05:32.556968 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (81.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m21.863031279s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.86s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6wcxk" [54a809b0-9a07-4cf0-bc9d-698a6280b7f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003885198s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
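
ControllerPod gates the rest of a plugin's group on the CNI's own pod being healthy; the selectors in this run are app=kindnet in kube-system, k8s-app=calico-node in kube-system, and app=flannel in kube-flannel. An equivalent wait for the kindnet case:

    kubectl --context kindnet-480008 -n kube-system wait \
      --for=condition=ready pod -l app=kindnet --timeout=10m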

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bppf9" [7bf28868-72c7-44d2-91e8-46a148829ff8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bppf9" [7bf28868-72c7-44d2-91e8-46a148829ff8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004457319s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (70.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0806 00:01:17.534074 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.72856123s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.73s)
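
custom-flannel shows that --cni also accepts a path: rather than one of the built-in names (bridge, calico, flannel, kindnet), minikube applies the supplied manifest itself. As run above:

    out/minikube-linux-arm64 start -p custom-flannel-480008 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio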

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-x4njc" [077786cf-4fdb-45ea-893a-72d81c5ac4b8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.011088164s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8hdh9" [731e3af6-eaed-427a-baaa-98eb689ee244] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8hdh9" [731e3af6-eaed-427a-baaa-98eb689ee244] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004298216s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (89.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.022650444s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2q6ht" [f3e96c7a-6d72-4185-92f1-2c5d9f2ee83e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2q6ht" [f3e96c7a-6d72-4185-92f1-2c5d9f2ee83e] Running
E0806 00:02:28.719713 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/old-k8s-version-287333/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005083508s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (63.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0806 00:03:12.352722 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0806 00:03:29.306781 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/functional-220049/client.crt: no such file or directory
E0806 00:03:33.689296 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.205186628s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.21s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cwxqw" [e68741f1-885a-4732-86f1-ddb119828d54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cwxqw" [e68741f1-885a-4732-86f1-ddb119828d54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004597581s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0806 00:03:48.498409 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:03:48.503647 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:03:48.513894 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0806 00:03:48.536878 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:03:48.577301 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:03:48.657713 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-frvn9" [cfe7df81-fb2b-4cc8-9841-39aaef8e54c3] Running
E0806 00:03:58.742182 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/auto-480008/client.crt: no such file or directory
E0806 00:04:01.374931 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/no-preload-263593/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005435067s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tm95t" [b7aee0ad-ec25-4f37-bdcd-5622d3b1bff9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tm95t" [b7aee0ad-ec25-4f37-bdcd-5622d3b1bff9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003405745s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

TestNetworkPlugins/group/bridge/Start (90.7s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-480008 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m30.704806064s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.70s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-480008 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-480008 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fc89f" [03674062-1f4c-4de7-a3c3-215cb28c2834] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0806 00:05:42.797767 1565121 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-1559727/.minikube/profiles/kindnet-480008/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-fc89f" [03674062-1f4c-4de7-a3c3-215cb28c2834] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004643769s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-480008 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-480008 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (33/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-565200 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-565200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-565200
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
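For context: this skip is gated on an integration-test flag, not a minikube flag; presumably the test is opted in by passing the gvisor flag through to the test binary, along the lines of (hypothetical invocation, inferred only from the skip message above):

    go test ./test/integration -run TestGvisorAddon -args --gvisor=true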

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-284355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-284355
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.41s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-480008 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-480008

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-480008

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/hosts:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/resolv.conf:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-480008

>>> host: crictl pods:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: crictl containers:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> k8s: describe netcat deployment:
error: context "kubenet-480008" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-480008" does not exist

>>> k8s: netcat logs:
error: context "kubenet-480008" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-480008" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-480008" does not exist

>>> k8s: coredns logs:
error: context "kubenet-480008" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-480008" does not exist

>>> k8s: api server logs:
error: context "kubenet-480008" does not exist

>>> host: /etc/cni:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: ip a s:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: ip r s:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: iptables-save:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: iptables table nat:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-480008" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-480008" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-480008" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: kubelet daemon config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> k8s: kubelet logs:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-480008

>>> host: docker daemon status:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: docker daemon config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: docker system info:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: cri-docker daemon status:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: cri-docker daemon config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: cri-dockerd version:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: containerd daemon status:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: containerd daemon config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: containerd config dump:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: crio daemon status:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: crio daemon config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: /etc/crio:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

>>> host: crio config:
* Profile "kubenet-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480008"

----------------------- debugLogs end: kubenet-480008 [took: 4.182197354s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-480008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-480008
--- SKIP: TestNetworkPlugins/group/kubenet (4.41s)
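For context: every probe in the debugLogs dump above fails in one of two ways because the kubenet-480008 profile was never started; the test skips before minikube start runs, so kubectl has no such context and minikube has no such profile. Both symptoms reproduce by hand (illustrative sketch):

    kubectl config get-contexts kubenet-480008   # fails: context not found
    minikube profile list                        # kubenet-480008 is absent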

TestNetworkPlugins/group/cilium (5.4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-480008 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-480008" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-480008

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: cri-dockerd version:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: containerd daemon status:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: containerd daemon config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: containerd config dump:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: crio daemon status:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: crio daemon config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: /etc/crio:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

>>> host: crio config:
* Profile "cilium-480008" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480008"

----------------------- debugLogs end: cilium-480008 [took: 5.16361852s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-480008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-480008
--- SKIP: TestNetworkPlugins/group/cilium (5.40s)