Test Report: Docker_Linux_crio_arm64 19283

8d2418a61c606cc3028c5bf9242bf095ec458362:2024-07-17:35383

Tests failed (2/336):

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress       | 151.91       |
| 41    | TestAddons/parallel/MetricsServer | 319.48       |
TestAddons/parallel/Ingress (151.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-747597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-747597 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-747597 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5148535d-e048-41d1-bf66-c8aa5e381f36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5148535d-e048-41d1-bf66-c8aa5e381f36] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003239834s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-747597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.830520946s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
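The `ssh: Process exited with status 28` in stderr above is curl's own return code surfacing through `minikube ssh`: per curl's documented exit codes, 28 means the operation timed out, i.e. the ingress endpoint never answered. A small sketch for decoding the common curl exit codes (the helper name `decode_curl_exit` is hypothetical, not part of the test suite):

```shell
# Hedged sketch: map curl's numeric exit status (as propagated through
# `minikube ssh "curl ..."`) to a human-readable cause.
# Meanings taken from curl's documented return codes.
decode_curl_exit() {
  case "$1" in
    6)  echo "could not resolve host" ;;
    7)  echo "failed to connect" ;;
    28) echo "operation timed out" ;;
    52) echo "empty reply from server" ;;
    *)  echo "see 'man curl' EXIT CODES" ;;
  esac
}

decode_curl_exit 28   # -> operation timed out
```

Exit 28 here, after a ~2m9s wait, points at the ingress-nginx controller accepting nothing on port 80 inside the node rather than at a DNS or connection-refused failure (which would be codes 6 or 7).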
addons_test.go:288: (dbg) Run:  kubectl --context addons-747597 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 addons disable ingress-dns --alsologtostderr -v=1: (1.412092392s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 addons disable ingress --alsologtostderr -v=1: (7.760587649s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-747597
helpers_test.go:235: (dbg) docker inspect addons-747597:

-- stdout --
	[
	    {
	        "Id": "dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455",
	        "Created": "2024-07-17T19:17:46.119548905Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 596661,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T19:17:46.264652095Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:476b38520acaa45848ac08864bd6ca4a7124b7e691863e24807ecda76b00d113",
	        "ResolvConfPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/hostname",
	        "HostsPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/hosts",
	        "LogPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455-json.log",
	        "Name": "/addons-747597",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-747597:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-747597",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b-init/diff:/var/lib/docker/overlay2/565efae8277f893e1a3772eb51129c6122836d34f0368ed890f207f355d67a18/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-747597",
	                "Source": "/var/lib/docker/volumes/addons-747597/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-747597",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-747597",
	                "name.minikube.sigs.k8s.io": "addons-747597",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d626e00ee6a37211769924797a6438dbe14f526af44275b8e7c651b68301959a",
	            "SandboxKey": "/var/run/docker/netns/d626e00ee6a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-747597": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "59fab1ccd3de0618adab634ca644a1d762012d21123cf13746cc98801bca43f9",
	                    "EndpointID": "cf04ef327c7a1daf7bc36516cfa460ddb82b3b0cfe0f096f72b636f0283866ba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-747597",
	                        "dda8db92681d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
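The `NetworkSettings.Ports` map in the inspect output above records where each guest port lands on the host (e.g. 22/tcp -> 127.0.0.1:33508), which is how the test harness reaches the node over SSH. A hedged sketch for scraping that mapping out of inspect-style JSON with plain grep/sed; the inline `inspect_json` is a trimmed stand-in for the real output, not the full document:

```shell
# Hedged sketch: extract the host port mapped to the guest's 22/tcp
# from docker-inspect-shaped JSON, using only grep and sed.
# (Assumption: HostPort appears within 3 lines of the "22/tcp" key,
# as in the NetworkSettings.Ports block shown above.)
inspect_json='
    "Ports": {
        "22/tcp": [
            { "HostIp": "127.0.0.1", "HostPort": "33508" }
        ]
    }
'

ssh_port=$(printf '%s' "$inspect_json" \
  | grep -A3 '"22/tcp"' \
  | sed -n 's/.*"HostPort": "\([0-9]*\)".*/\1/p')

echo "$ssh_port"   # -> 33508
```

Against a live daemon the usual route is a Go template, along the lines of `docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-747597`; the grep/sed form above is just for post-mortem logs like this one where only the captured text is available.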
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-747597 -n addons-747597
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 logs -n 25: (1.467471845s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-902211                                                                     | download-only-902211   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-186638                                                                     | download-only-186638   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-639410                                                                     | download-only-639410   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-902211                                                                     | download-only-902211   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | --download-only -p                                                                          | download-docker-114745 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | download-docker-114745                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-114745                                                                   | download-docker-114745 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-794463   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | binary-mirror-794463                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35105                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-794463                                                                     | binary-mirror-794463   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| addons  | enable dashboard -p                                                                         | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-747597 --wait=true                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | -p addons-747597                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-747597 ip                                                                            | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | -p addons-747597                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-747597 ssh cat                                                                       | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | /opt/local-path-provisioner/pvc-e6f3f4fe-8b6e-4e46-a13c-533c45ae5ad4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| addons  | addons-747597 addons                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747597 addons                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-747597 ssh curl -s                                                                   | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-747597 ip                                                                            | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:17:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:21.718625  596166 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:17:21.718784  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:21.718801  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:17:21.718808  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:21.719046  596166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:17:21.719520  596166 out.go:298] Setting JSON to false
	I0717 19:17:21.720413  596166 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10785,"bootTime":1721233057,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:17:21.720493  596166 start.go:139] virtualization:  
	I0717 19:17:21.723241  596166 out.go:177] * [addons-747597] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 19:17:21.725161  596166 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:17:21.725228  596166 notify.go:220] Checking for updates...
	I0717 19:17:21.729208  596166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:21.731175  596166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:17:21.732979  596166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:17:21.734754  596166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 19:17:21.737008  596166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:17:21.738955  596166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:17:21.760370  596166 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:17:21.760497  596166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:21.824655  596166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:21.815605673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:21.824801  596166 docker.go:307] overlay module found
	I0717 19:17:21.828054  596166 out.go:177] * Using the docker driver based on user configuration
	I0717 19:17:21.829931  596166 start.go:297] selected driver: docker
	I0717 19:17:21.829950  596166 start.go:901] validating driver "docker" against <nil>
	I0717 19:17:21.829965  596166 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:17:21.830594  596166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:21.879901  596166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:21.871112628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:21.880062  596166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:17:21.880293  596166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:17:21.882569  596166 out.go:177] * Using Docker driver with root privileges
	I0717 19:17:21.884990  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:17:21.885015  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:21.885030  596166 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:17:21.885135  596166 start.go:340] cluster config:
	{Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:21.888891  596166 out.go:177] * Starting "addons-747597" primary control-plane node in "addons-747597" cluster
	I0717 19:17:21.890888  596166 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 19:17:21.892749  596166 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 19:17:21.894711  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:21.894757  596166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 19:17:21.894772  596166 cache.go:56] Caching tarball of preloaded images
	I0717 19:17:21.894797  596166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 19:17:21.894853  596166 preload.go:172] Found /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 19:17:21.894863  596166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 19:17:21.895202  596166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json ...
	I0717 19:17:21.895232  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json: {Name:mk00e7f571c60a530945c6cef35ba32aa47eea2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:21.913432  596166 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:17:21.913582  596166 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 19:17:21.913603  596166 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 19:17:21.913608  596166 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 19:17:21.913616  596166 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 19:17:21.913622  596166 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 19:17:38.578202  596166 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 19:17:38.578236  596166 cache.go:194] Successfully downloaded all kic artifacts
	I0717 19:17:38.578279  596166 start.go:360] acquireMachinesLock for addons-747597: {Name:mkfb0f489a4eb78a4e21cfb654d8f2daf2a9477b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:38.578778  596166 start.go:364] duration metric: took 462.761µs to acquireMachinesLock for "addons-747597"
	I0717 19:17:38.578818  596166 start.go:93] Provisioning new machine with config: &{Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:17:38.578910  596166 start.go:125] createHost starting for "" (driver="docker")
	I0717 19:17:38.581230  596166 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 19:17:38.581524  596166 start.go:159] libmachine.API.Create for "addons-747597" (driver="docker")
	I0717 19:17:38.581566  596166 client.go:168] LocalClient.Create starting
	I0717 19:17:38.581714  596166 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem
	I0717 19:17:39.073257  596166 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem
	I0717 19:17:39.610837  596166 cli_runner.go:164] Run: docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:39.625891  596166 cli_runner.go:211] docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:39.625982  596166 network_create.go:284] running [docker network inspect addons-747597] to gather additional debugging logs...
	I0717 19:17:39.626004  596166 cli_runner.go:164] Run: docker network inspect addons-747597
	W0717 19:17:39.641731  596166 cli_runner.go:211] docker network inspect addons-747597 returned with exit code 1
	I0717 19:17:39.641761  596166 network_create.go:287] error running [docker network inspect addons-747597]: docker network inspect addons-747597: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-747597 not found
	I0717 19:17:39.641774  596166 network_create.go:289] output of [docker network inspect addons-747597]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-747597 not found
	
	** /stderr **
	I0717 19:17:39.641871  596166 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:39.657575  596166 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000478780}
	I0717 19:17:39.657618  596166 network_create.go:124] attempt to create docker network addons-747597 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 19:17:39.657674  596166 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-747597 addons-747597
	I0717 19:17:39.726631  596166 network_create.go:108] docker network addons-747597 192.168.49.0/24 created
	I0717 19:17:39.726662  596166 kic.go:121] calculated static IP "192.168.49.2" for the "addons-747597" container
	I0717 19:17:39.726748  596166 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 19:17:39.739891  596166 cli_runner.go:164] Run: docker volume create addons-747597 --label name.minikube.sigs.k8s.io=addons-747597 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:17:39.756280  596166 oci.go:103] Successfully created a docker volume addons-747597
	I0717 19:17:39.756372  596166 cli_runner.go:164] Run: docker run --rm --name addons-747597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --entrypoint /usr/bin/test -v addons-747597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib
	I0717 19:17:41.831592  596166 cli_runner.go:217] Completed: docker run --rm --name addons-747597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --entrypoint /usr/bin/test -v addons-747597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib: (2.075178303s)
	I0717 19:17:41.831619  596166 oci.go:107] Successfully prepared a docker volume addons-747597
	I0717 19:17:41.831645  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:41.831721  596166 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 19:17:41.831819  596166 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 19:17:46.046721  596166 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir: (4.214853601s)
	I0717 19:17:46.046755  596166 kic.go:203] duration metric: took 4.21508633s to extract preloaded images to volume ...
	W0717 19:17:46.046911  596166 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:17:46.047031  596166 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:17:46.106111  596166 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-747597 --name addons-747597 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-747597 --network addons-747597 --ip 192.168.49.2 --volume addons-747597:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e
	I0717 19:17:46.423019  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Running}}
	I0717 19:17:46.440159  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:46.461943  596166 cli_runner.go:164] Run: docker exec addons-747597 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:17:46.522898  596166 oci.go:144] the created container "addons-747597" has a running status.
	I0717 19:17:46.522934  596166 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa...
	I0717 19:17:46.884540  596166 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:17:46.912990  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:46.936762  596166 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:17:46.936784  596166 kic_runner.go:114] Args: [docker exec --privileged addons-747597 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:17:47.033515  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:47.062218  596166 machine.go:94] provisionDockerMachine start ...
	I0717 19:17:47.062320  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.092868  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.093156  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.093172  596166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:17:47.267483  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747597
	
	I0717 19:17:47.267508  596166 ubuntu.go:169] provisioning hostname "addons-747597"
	I0717 19:17:47.267579  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.284221  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.284487  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.284504  596166 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-747597 && echo "addons-747597" | sudo tee /etc/hostname
	I0717 19:17:47.437281  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747597
	
	I0717 19:17:47.437445  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.457540  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.457805  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.457822  596166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-747597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-747597/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-747597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:47.599468  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:47.599537  596166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19283-589755/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-589755/.minikube}
	I0717 19:17:47.599574  596166 ubuntu.go:177] setting up certificates
	I0717 19:17:47.599618  596166 provision.go:84] configureAuth start
	I0717 19:17:47.599704  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:47.616323  596166 provision.go:143] copyHostCerts
	I0717 19:17:47.616412  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/ca.pem (1082 bytes)
	I0717 19:17:47.616534  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/cert.pem (1123 bytes)
	I0717 19:17:47.616594  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/key.pem (1679 bytes)
	I0717 19:17:47.616645  596166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem org=jenkins.addons-747597 san=[127.0.0.1 192.168.49.2 addons-747597 localhost minikube]
	I0717 19:17:47.980472  596166 provision.go:177] copyRemoteCerts
	I0717 19:17:47.980554  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:47.980597  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.998053  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.098632  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:17:48.125050  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 19:17:48.149791  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:17:48.174060  596166 provision.go:87] duration metric: took 574.408495ms to configureAuth
	I0717 19:17:48.174086  596166 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:48.174282  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:17:48.174381  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.190740  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:48.190986  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:48.191004  596166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:48.434333  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:48.434354  596166 machine.go:97] duration metric: took 1.372115681s to provisionDockerMachine
	I0717 19:17:48.434365  596166 client.go:171] duration metric: took 9.85278888s to LocalClient.Create
	I0717 19:17:48.434377  596166 start.go:167] duration metric: took 9.852854545s to libmachine.API.Create "addons-747597"
	I0717 19:17:48.434385  596166 start.go:293] postStartSetup for "addons-747597" (driver="docker")
	I0717 19:17:48.434396  596166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:48.434463  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:48.434528  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.451341  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.549041  596166 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:48.552325  596166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:48.552361  596166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:48.552372  596166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:48.552379  596166 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 19:17:48.552390  596166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-589755/.minikube/addons for local assets ...
	I0717 19:17:48.552462  596166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-589755/.minikube/files for local assets ...
	I0717 19:17:48.552489  596166 start.go:296] duration metric: took 118.098648ms for postStartSetup
	I0717 19:17:48.552811  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:48.571281  596166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json ...
	I0717 19:17:48.571600  596166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:48.571668  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.587100  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.684072  596166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:48.688287  596166 start.go:128] duration metric: took 10.109360386s to createHost
	I0717 19:17:48.688314  596166 start.go:83] releasing machines lock for "addons-747597", held for 10.109516587s
	I0717 19:17:48.688386  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:48.704731  596166 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:48.704788  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.705043  596166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:48.705107  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.722875  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.726364  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.957666  596166 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:48.961976  596166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:49.100914  596166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:49.105049  596166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:49.128053  596166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:49.128167  596166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:49.163861  596166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 19:17:49.163886  596166 start.go:495] detecting cgroup driver to use...
	I0717 19:17:49.163920  596166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:49.163971  596166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:49.179956  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:49.192127  596166 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:17:49.192240  596166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:49.206596  596166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:49.221085  596166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:49.304884  596166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:49.400214  596166 docker.go:233] disabling docker service ...
	I0717 19:17:49.400322  596166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:49.421276  596166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:49.433421  596166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:49.516786  596166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:49.617837  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:49.630131  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:49.648908  596166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:49.649005  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.660263  596166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:49.660396  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.670375  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.680512  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.691188  596166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:49.700910  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.711310  596166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.727441  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.738548  596166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:49.748376  596166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:49.757501  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:49.848042  596166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:49.962286  596166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:49.962397  596166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:49.967033  596166 start.go:563] Will wait 60s for crictl version
	I0717 19:17:49.967119  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:17:49.970262  596166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:50.016155  596166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:50.016305  596166 ssh_runner.go:195] Run: crio --version
	I0717 19:17:50.056016  596166 ssh_runner.go:195] Run: crio --version
	I0717 19:17:50.098760  596166 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 19:17:50.100943  596166 cli_runner.go:164] Run: docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:50.117732  596166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:50.121543  596166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:50.132878  596166 kubeadm.go:883] updating cluster {Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:17:50.133005  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:50.133072  596166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:50.210954  596166 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:17:50.210975  596166 crio.go:433] Images already preloaded, skipping extraction
	I0717 19:17:50.211034  596166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:50.247025  596166 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:17:50.247050  596166 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:50.247059  596166 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 19:17:50.247153  596166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-747597 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:17:50.247236  596166 ssh_runner.go:195] Run: crio config
	I0717 19:17:50.311123  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:17:50.311156  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:50.311168  596166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:50.311207  596166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-747597 NodeName:addons-747597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:50.311420  596166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-747597"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:50.311513  596166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:17:50.320537  596166 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:50.320616  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:50.329154  596166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 19:17:50.346703  596166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:50.364103  596166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 19:17:50.382440  596166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:50.385636  596166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:50.395914  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:50.483524  596166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:17:50.497244  596166 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597 for IP: 192.168.49.2
	I0717 19:17:50.497307  596166 certs.go:194] generating shared ca certs ...
	I0717 19:17:50.497338  596166 certs.go:226] acquiring lock for ca certs: {Name:mkc7f7593d6d49a6ae6b1662b77cfee02ea809e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.497897  596166 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key
	I0717 19:17:50.833850  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt ...
	I0717 19:17:50.833890  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt: {Name:mka5f97aa1d51e6f0603d75c5f9a2b330dc025e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.834786  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key ...
	I0717 19:17:50.834803  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key: {Name:mkf4b159ab3cd3d5e3d249a2fff3bc33a90d072b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.835245  596166 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key
	I0717 19:17:51.293004  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt ...
	I0717 19:17:51.293034  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt: {Name:mke165fa6523e843211ded021898033e2404971f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:51.293215  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key ...
	I0717 19:17:51.293227  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key: {Name:mkccbf8c4d24d8d21caa3e31ecf8f6434f64f5a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:51.293309  596166 certs.go:256] generating profile certs ...
	I0717 19:17:51.293369  596166 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key
	I0717 19:17:51.293387  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt with IP's: []
	I0717 19:17:52.093747  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt ...
	I0717 19:17:52.093826  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: {Name:mk57e121424289d3fe721af9c3e61bbb5d304f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.094656  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key ...
	I0717 19:17:52.094707  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key: {Name:mk1cd8ec5af29b222ec8b05308a6edb27d080927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.095303  596166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c
	I0717 19:17:52.095359  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 19:17:52.650684  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c ...
	I0717 19:17:52.650767  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c: {Name:mka4ee7d76916177a0049e09d0b7e9952971bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.651405  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c ...
	I0717 19:17:52.651453  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c: {Name:mk7b05f553b03554eebba85a801f758f4511ec95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.651598  596166 certs.go:381] copying /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c -> /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt
	I0717 19:17:52.651736  596166 certs.go:385] copying /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c -> /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key
	I0717 19:17:52.651865  596166 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key
	I0717 19:17:52.651906  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt with IP's: []
	I0717 19:17:52.809070  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt ...
	I0717 19:17:52.809144  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt: {Name:mk7f7c357585a783a146f3fa02fe902a6a53dd99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.809358  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key ...
	I0717 19:17:52.809403  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key: {Name:mk893a3c07b663871f150545a797c3bccf86b1e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.809652  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:52.809732  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:17:52.809793  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:52.809842  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem (1679 bytes)
	I0717 19:17:52.810599  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:52.855451  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:52.900836  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:52.925071  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:17:52.948565  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 19:17:52.972384  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:17:52.996470  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:53.023900  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:53.048535  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:53.073058  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:53.091329  596166 ssh_runner.go:195] Run: openssl version
	I0717 19:17:53.097560  596166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:53.107235  596166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.110786  596166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:17 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.110898  596166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.117930  596166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:17:53.127690  596166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:17:53.131093  596166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:17:53.131170  596166 kubeadm.go:392] StartCluster: {Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:53.131263  596166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:53.131330  596166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:53.169701  596166 cri.go:89] found id: ""
	I0717 19:17:53.169809  596166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:53.178427  596166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:53.187328  596166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 19:17:53.187433  596166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:53.196428  596166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:17:53.196447  596166 kubeadm.go:157] found existing configuration files:
	
	I0717 19:17:53.196526  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:53.205320  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:17:53.205388  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:17:53.213593  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:53.222394  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:17:53.222482  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:17:53.230790  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:53.239388  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:17:53.239449  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:17:53.247823  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:53.256552  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:17:53.256644  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:17:53.264981  596166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 19:17:53.308984  596166 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:17:53.309249  596166 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:17:53.349570  596166 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 19:17:53.349691  596166 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0717 19:17:53.349770  596166 kubeadm.go:310] OS: Linux
	I0717 19:17:53.349843  596166 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 19:17:53.349913  596166 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 19:17:53.349994  596166 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 19:17:53.350060  596166 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 19:17:53.350121  596166 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 19:17:53.350176  596166 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 19:17:53.350223  596166 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 19:17:53.350274  596166 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 19:17:53.350323  596166 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 19:17:53.417468  596166 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:17:53.417764  596166 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:17:53.417908  596166 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:17:53.663828  596166 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:17:53.666617  596166 out.go:204]   - Generating certificates and keys ...
	I0717 19:17:53.666749  596166 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:17:53.666829  596166 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:17:54.081589  596166 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:17:54.666824  596166 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:17:55.066585  596166 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:17:55.329265  596166 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:17:56.287066  596166 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:17:56.287286  596166 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-747597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 19:17:56.688200  596166 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:17:56.688568  596166 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-747597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 19:17:56.887065  596166 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:17:57.566148  596166 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:17:58.037853  596166 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:17:58.038044  596166 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:17:58.402627  596166 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:17:59.501928  596166 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:17:59.718924  596166 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:18:00.454091  596166 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:18:00.677742  596166 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:18:00.678525  596166 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:18:00.683209  596166 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:18:00.685732  596166 out.go:204]   - Booting up control plane ...
	I0717 19:18:00.685850  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:18:00.685930  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:18:00.686717  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:18:00.697193  596166 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:18:00.698347  596166 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:18:00.698400  596166 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:18:00.790178  596166 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:18:00.790287  596166 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:18:02.792048  596166 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001914624s
	I0717 19:18:02.792140  596166 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:18:09.293554  596166 kubeadm.go:310] [api-check] The API server is healthy after 6.501726692s
	I0717 19:18:09.318470  596166 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:18:09.336379  596166 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:18:09.374182  596166 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:18:09.374377  596166 kubeadm.go:310] [mark-control-plane] Marking the node addons-747597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:18:09.390994  596166 kubeadm.go:310] [bootstrap-token] Using token: hqg7j9.p48nu7eegj1iucst
	I0717 19:18:09.393145  596166 out.go:204]   - Configuring RBAC rules ...
	I0717 19:18:09.393293  596166 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:18:09.409913  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:18:09.419668  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:18:09.423609  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:18:09.427647  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:18:09.431924  596166 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:18:09.700648  596166 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:18:10.158526  596166 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:18:10.700600  596166 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:18:10.701747  596166 kubeadm.go:310] 
	I0717 19:18:10.701821  596166 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:18:10.701834  596166 kubeadm.go:310] 
	I0717 19:18:10.701911  596166 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:18:10.701922  596166 kubeadm.go:310] 
	I0717 19:18:10.701948  596166 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:18:10.702008  596166 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:18:10.702059  596166 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:18:10.702067  596166 kubeadm.go:310] 
	I0717 19:18:10.702119  596166 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:18:10.702127  596166 kubeadm.go:310] 
	I0717 19:18:10.702172  596166 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:18:10.702180  596166 kubeadm.go:310] 
	I0717 19:18:10.702230  596166 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:18:10.702305  596166 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:18:10.702375  596166 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:18:10.702382  596166 kubeadm.go:310] 
	I0717 19:18:10.702464  596166 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:18:10.702559  596166 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:18:10.702567  596166 kubeadm.go:310] 
	I0717 19:18:10.702653  596166 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hqg7j9.p48nu7eegj1iucst \
	I0717 19:18:10.702754  596166 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:92bc4c9c8cac954f78c64a34e7c101c21493fd8a72d692c72f057161814bfde5 \
	I0717 19:18:10.702777  596166 kubeadm.go:310] 	--control-plane 
	I0717 19:18:10.702787  596166 kubeadm.go:310] 
	I0717 19:18:10.702869  596166 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:18:10.702876  596166 kubeadm.go:310] 
	I0717 19:18:10.702956  596166 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hqg7j9.p48nu7eegj1iucst \
	I0717 19:18:10.703056  596166 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:92bc4c9c8cac954f78c64a34e7c101c21493fd8a72d692c72f057161814bfde5 
	I0717 19:18:10.706386  596166 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0717 19:18:10.706555  596166 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:18:10.706580  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:18:10.706604  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:18:10.708935  596166 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:18:10.710892  596166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:18:10.714420  596166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 19:18:10.714439  596166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:18:10.732799  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:18:11.032413  596166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:18:11.032495  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:11.032557  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-747597 minikube.k8s.io/updated_at=2024_07_17T19_18_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=addons-747597 minikube.k8s.io/primary=true
	I0717 19:18:11.190016  596166 ops.go:34] apiserver oom_adj: -16
	I0717 19:18:11.190140  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:11.690887  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:12.190903  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:12.690943  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:13.190838  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:13.690604  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:14.191111  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:14.690331  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:15.191243  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:15.690974  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:16.190522  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:16.691212  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:17.190699  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:17.690273  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:18.190682  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:18.690278  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:19.190921  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:19.691006  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:20.190763  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:20.690799  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:21.190936  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:21.690283  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:22.190356  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:22.690898  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.190974  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.690729  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.847573  596166 kubeadm.go:1113] duration metric: took 12.81514787s to wait for elevateKubeSystemPrivileges
	I0717 19:18:23.847603  596166 kubeadm.go:394] duration metric: took 30.716463909s to StartCluster
	I0717 19:18:23.847622  596166 settings.go:142] acquiring lock: {Name:mkb34a92534e6ebb88b1dc61f5cef4e8adaa41ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:18:23.848429  596166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:18:23.848910  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/kubeconfig: {Name:mk6ca856576f3a45e2fc0d3c3f561dd766d29da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:18:23.849112  596166 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:18:23.849210  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:18:23.849468  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:18:23.849500  596166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 19:18:23.849594  596166 addons.go:69] Setting yakd=true in profile "addons-747597"
	I0717 19:18:23.849622  596166 addons.go:234] Setting addon yakd=true in "addons-747597"
	I0717 19:18:23.849649  596166 addons.go:69] Setting cloud-spanner=true in profile "addons-747597"
	I0717 19:18:23.849706  596166 addons.go:234] Setting addon cloud-spanner=true in "addons-747597"
	I0717 19:18:23.849758  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.849767  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.850209  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.850333  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.849621  596166 addons.go:69] Setting ingress=true in profile "addons-747597"
	I0717 19:18:23.850771  596166 addons.go:234] Setting addon ingress=true in "addons-747597"
	I0717 19:18:23.850810  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.851215  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.852091  596166 out.go:177] * Verifying Kubernetes components...
	I0717 19:18:23.852278  596166 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-747597"
	I0717 19:18:23.852349  596166 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-747597"
	I0717 19:18:23.852380  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.852789  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.853937  596166 addons.go:69] Setting default-storageclass=true in profile "addons-747597"
	I0717 19:18:23.853981  596166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-747597"
	I0717 19:18:23.854261  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.863417  596166 addons.go:69] Setting gcp-auth=true in profile "addons-747597"
	I0717 19:18:23.863479  596166 mustload.go:65] Loading cluster: addons-747597
	I0717 19:18:23.863664  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:18:23.863908  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.865761  596166 addons.go:69] Setting ingress-dns=true in profile "addons-747597"
	I0717 19:18:23.865799  596166 addons.go:234] Setting addon ingress-dns=true in "addons-747597"
	I0717 19:18:23.865852  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.866249  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.870246  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:18:23.875944  596166 addons.go:69] Setting inspektor-gadget=true in profile "addons-747597"
	I0717 19:18:23.875984  596166 addons.go:234] Setting addon inspektor-gadget=true in "addons-747597"
	I0717 19:18:23.876022  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.876459  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.890219  596166 addons.go:69] Setting metrics-server=true in profile "addons-747597"
	I0717 19:18:23.890258  596166 addons.go:234] Setting addon metrics-server=true in "addons-747597"
	I0717 19:18:23.890293  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.890752  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.908403  596166 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-747597"
	I0717 19:18:23.908446  596166 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-747597"
	I0717 19:18:23.908490  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.908958  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.927535  596166 addons.go:69] Setting volcano=true in profile "addons-747597"
	I0717 19:18:23.927632  596166 addons.go:234] Setting addon volcano=true in "addons-747597"
	I0717 19:18:23.927702  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.936346  596166 addons.go:69] Setting volumesnapshots=true in profile "addons-747597"
	I0717 19:18:23.936394  596166 addons.go:234] Setting addon volumesnapshots=true in "addons-747597"
	I0717 19:18:23.936433  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.936863  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.959532  596166 addons.go:69] Setting registry=true in profile "addons-747597"
	I0717 19:18:23.960164  596166 addons.go:234] Setting addon registry=true in "addons-747597"
	I0717 19:18:23.960235  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.960723  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.004303  596166 addons.go:69] Setting storage-provisioner=true in profile "addons-747597"
	I0717 19:18:24.004401  596166 addons.go:234] Setting addon storage-provisioner=true in "addons-747597"
	I0717 19:18:24.004473  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.004920  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.004999  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.011484  596166 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-747597"
	I0717 19:18:24.024247  596166 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-747597"
	I0717 19:18:24.024609  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.013508  596166 addons.go:234] Setting addon default-storageclass=true in "addons-747597"
	I0717 19:18:24.033026  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.033501  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.071255  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 19:18:24.075506  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 19:18:24.077335  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 19:18:24.100228  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 19:18:24.100477  596166 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 19:18:24.102678  596166 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 19:18:24.104537  596166 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 19:18:24.104561  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 19:18:24.104628  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.108885  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 19:18:24.109302  596166 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 19:18:24.111731  596166 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 19:18:24.111757  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 19:18:24.111827  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.120101  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.127480  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 19:18:24.127926  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:24.102689  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 19:18:24.128733  596166 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 19:18:24.128804  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.143190  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 19:18:24.144993  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 19:18:24.149191  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 19:18:24.149222  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 19:18:24.149303  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.166296  596166 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 19:18:24.166359  596166 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 19:18:24.170293  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:24.173677  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 19:18:24.176162  596166 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 19:18:24.176186  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 19:18:24.176259  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.180843  596166 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 19:18:24.180865  596166 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 19:18:24.180934  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.203016  596166 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 19:18:24.203145  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 19:18:24.203915  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:18:24.203942  596166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:18:24.204007  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.204868  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 19:18:24.204882  596166 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 19:18:24.204940  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.229290  596166 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 19:18:24.235573  596166 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 19:18:24.235605  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 19:18:24.235690  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.238703  596166 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 19:18:24.242442  596166 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-747597"
	I0717 19:18:24.242484  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.243258  596166 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 19:18:24.243277  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 19:18:24.243339  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	W0717 19:18:24.253831  596166 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 19:18:24.256300  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.272506  596166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:18:24.272525  596166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:18:24.272594  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.283501  596166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:18:24.289858  596166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:18:24.290309  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 19:18:24.295509  596166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:18:24.295534  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:18:24.295601  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.312938  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.386237  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.409054  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.419582  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.427687  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.431474  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.431927  596166 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 19:18:24.434160  596166 out.go:177]   - Using image docker.io/busybox:stable
	I0717 19:18:24.436392  596166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 19:18:24.436414  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 19:18:24.436478  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.467527  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.476494  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.477234  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.487680  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.491452  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.493970  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.508461  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.866387  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 19:18:24.869296  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 19:18:24.892469  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:18:24.892539  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 19:18:24.900822  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 19:18:24.925844  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 19:18:24.925916  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 19:18:24.935864  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 19:18:24.935945  596166 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 19:18:24.946496  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:18:24.951851  596166 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 19:18:24.951927  596166 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 19:18:24.955283  596166 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 19:18:24.955372  596166 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 19:18:24.993903  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 19:18:24.993989  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 19:18:24.999205  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 19:18:25.005110  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:18:25.043208  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:18:25.043290  596166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:18:25.046532  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 19:18:25.081867  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 19:18:25.081955  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 19:18:25.124527  596166 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 19:18:25.124596  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 19:18:25.153742  596166 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 19:18:25.153821  596166 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 19:18:25.157527  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 19:18:25.157611  596166 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 19:18:25.161417  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 19:18:25.161501  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 19:18:25.200266  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:18:25.200345  596166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:18:25.302537  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 19:18:25.302629  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 19:18:25.313977  596166 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 19:18:25.314052  596166 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 19:18:25.344887  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 19:18:25.374671  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 19:18:25.374697  596166 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 19:18:25.390940  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:18:25.397846  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 19:18:25.397920  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 19:18:25.477894  596166 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 19:18:25.477965  596166 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 19:18:25.509396  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 19:18:25.509473  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 19:18:25.544168  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 19:18:25.544242  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 19:18:25.550883  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 19:18:25.550956  596166 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 19:18:25.613718  596166 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 19:18:25.613789  596166 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 19:18:25.663232  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 19:18:25.679914  596166 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:25.680003  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 19:18:25.720539  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 19:18:25.720617  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 19:18:25.777277  596166 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 19:18:25.777350  596166 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 19:18:25.827946  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:25.833583  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 19:18:25.833653  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 19:18:25.845964  596166 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 19:18:25.846024  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 19:18:25.959662  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 19:18:25.979524  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 19:18:25.979598  596166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 19:18:26.157654  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 19:18:26.157725  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 19:18:26.300795  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 19:18:26.300866  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 19:18:26.415685  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 19:18:26.415757  596166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 19:18:26.524213  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 19:18:27.319528  596166 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.029636665s)
	I0717 19:18:27.320596  596166 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.030267457s)
	I0717 19:18:27.320656  596166 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 19:18:27.320548  596166 node_ready.go:35] waiting up to 6m0s for node "addons-747597" to be "Ready" ...
	I0717 19:18:28.396203  596166 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-747597" context rescaled to 1 replicas
	I0717 19:18:29.252654  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.386181259s)
	I0717 19:18:29.252715  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.383353408s)
	I0717 19:18:29.388190  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:29.747335  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.846441641s)
	I0717 19:18:29.747614  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.801047662s)
	I0717 19:18:30.795202  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.795918173s)
	I0717 19:18:30.795237  596166 addons.go:475] Verifying addon ingress=true in "addons-747597"
	I0717 19:18:30.795418  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.790226384s)
	I0717 19:18:30.795588  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.748996903s)
	I0717 19:18:30.795624  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.450668132s)
	I0717 19:18:30.795634  596166 addons.go:475] Verifying addon registry=true in "addons-747597"
	I0717 19:18:30.795827  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.404804223s)
	I0717 19:18:30.795848  596166 addons.go:475] Verifying addon metrics-server=true in "addons-747597"
	I0717 19:18:30.795892  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.132587676s)
	I0717 19:18:30.797806  596166 out.go:177] * Verifying ingress addon...
	I0717 19:18:30.799390  596166 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-747597 service yakd-dashboard -n yakd-dashboard
	
	I0717 19:18:30.799412  596166 out.go:177] * Verifying registry addon...
	I0717 19:18:30.801508  596166 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 19:18:30.802940  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 19:18:30.808388  596166 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 19:18:30.808416  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:30.811328  596166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 19:18:30.811346  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:30.856011  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.027972986s)
	W0717 19:18:30.856050  596166 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 19:18:30.856071  596166 retry.go:31] will retry after 273.179894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 19:18:30.856101  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.896355285s)
	I0717 19:18:31.130256  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:31.142172  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.617867337s)
	I0717 19:18:31.142209  596166 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-747597"
	I0717 19:18:31.144576  596166 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 19:18:31.146922  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 19:18:31.170208  596166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 19:18:31.170236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:31.307319  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:31.310050  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:31.651097  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:31.805576  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:31.816302  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:31.823696  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:32.153987  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:32.305168  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:32.308651  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:32.651161  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:32.805922  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:32.808149  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:32.962328  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 19:18:32.962455  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:32.991424  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:33.149731  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 19:18:33.153483  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:33.176163  596166 addons.go:234] Setting addon gcp-auth=true in "addons-747597"
	I0717 19:18:33.176257  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:33.176727  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:33.208652  596166 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 19:18:33.208713  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:33.227339  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:33.306984  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:33.309820  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:33.651219  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:33.809991  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:33.811401  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:33.829602  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:33.946116  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.815810535s)
	I0717 19:18:33.948805  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:33.950746  596166 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 19:18:33.952953  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 19:18:33.953019  596166 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 19:18:33.987943  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 19:18:33.988016  596166 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 19:18:34.011142  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 19:18:34.011218  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 19:18:34.036604  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 19:18:34.158145  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:34.305598  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:34.311312  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:34.657132  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:34.843261  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:34.854461  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:34.872688  596166 addons.go:475] Verifying addon gcp-auth=true in "addons-747597"
	I0717 19:18:34.874689  596166 out.go:177] * Verifying gcp-auth addon...
	I0717 19:18:34.877908  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 19:18:34.906030  596166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 19:18:34.906097  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:35.153453  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:35.309214  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:35.310757  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:35.385210  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:35.651737  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:35.808644  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:35.810332  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:35.881862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:36.151742  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:36.305968  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:36.307762  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:36.324579  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:36.381250  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:36.653345  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:36.812167  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:36.812837  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:36.883728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:37.151929  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:37.313854  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:37.314855  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:37.382010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:37.651476  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:37.805854  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:37.808156  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:37.881845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:38.151070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:38.308144  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:38.312868  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:38.381762  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:38.651293  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:38.806647  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:38.807575  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:38.824347  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:38.881161  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:39.150949  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:39.306958  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:39.310923  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:39.381844  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:39.651668  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:39.805890  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:39.807735  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:39.881989  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:40.151532  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:40.305331  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:40.307661  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:40.381914  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:40.651462  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:40.805823  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:40.808722  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:40.824601  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:40.881599  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:41.152055  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:41.305827  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:41.307933  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:41.381678  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:41.651686  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:41.805581  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:41.807717  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:41.882168  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:42.151907  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:42.305769  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:42.309383  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:42.381777  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:42.650973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:42.806152  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:42.806479  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:42.881906  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:43.152003  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:43.306604  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:43.307730  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:43.324044  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:43.382700  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:43.652706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:43.807553  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:43.808438  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:43.881942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:44.150869  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:44.308112  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:44.308339  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:44.381910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:44.651743  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:44.806068  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:44.806752  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:44.881707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:45.153267  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:45.308026  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:45.308540  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:45.324959  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:45.381928  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:45.652014  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:45.806374  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:45.808510  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:45.881572  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:46.153864  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:46.307776  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:46.308816  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:46.381568  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:46.651902  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:46.806416  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:46.807084  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:46.881550  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:47.152096  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:47.307546  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:47.308244  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:47.325445  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:47.382575  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:47.652029  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:47.806453  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:47.808965  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:47.881524  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:48.152126  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:48.308513  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:48.309285  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:48.381826  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:48.652273  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:48.807335  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:48.808117  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:48.881853  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:49.152150  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:49.306828  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:49.308571  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:49.383107  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:49.651162  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:49.806751  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:49.808963  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:49.824074  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:49.881704  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:50.151359  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:50.306705  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:50.307760  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:50.381752  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:50.651619  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:50.807024  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:50.807449  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:50.882098  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:51.151250  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:51.305657  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:51.308847  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:51.381256  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:51.651480  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:51.806136  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:51.807202  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:51.824624  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:51.881772  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:52.151774  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:52.306360  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:52.307344  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:52.381749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:52.651935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:52.805472  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:52.807723  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:52.881980  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:53.151292  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:53.305853  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:53.307401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:53.381700  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:53.651753  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:53.806884  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:53.808011  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:53.889286  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:54.151976  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:54.313378  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:54.314032  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:54.331764  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:54.383098  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:54.654262  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:54.806665  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:54.808327  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:54.882581  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:55.151554  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:55.305817  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:55.315657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:55.381699  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:55.651745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:55.807453  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:55.807749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:55.881627  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:56.151663  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:56.306862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:56.307842  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:56.381755  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:56.650910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:56.805901  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:56.808027  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:56.825466  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:56.882089  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:57.150875  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:57.305187  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:57.308069  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:57.381468  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:57.652256  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:57.806833  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:57.807400  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:57.883074  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:58.152073  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:58.306166  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:58.308155  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:58.382027  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:58.651913  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:58.806042  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:58.807794  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:58.881973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:59.151346  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:59.305221  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:59.307998  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:59.324003  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:59.381747  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:59.651620  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:59.806086  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:59.807734  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:59.881024  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:00.185974  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:00.315098  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:00.316770  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:00.382457  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:00.652109  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:00.805970  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:00.807534  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:00.881102  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:01.151765  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:01.307548  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:01.308147  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:01.324498  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:01.381758  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:01.651635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:01.806635  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:01.807797  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:01.881253  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:02.151231  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:02.306158  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:02.308461  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:02.381888  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:02.651353  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:02.807051  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:02.807834  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:02.882233  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:03.155701  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:03.306867  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:03.307457  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:03.324734  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:03.381451  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:03.651600  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:03.806185  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:03.807213  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:03.881901  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:04.150845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:04.306557  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:04.307248  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:04.382211  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:04.651901  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:04.806916  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:04.807647  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:04.881554  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:05.151320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:05.306075  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:05.306839  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:05.381763  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:05.652015  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:05.806733  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:05.807311  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:05.824044  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:05.881977  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:06.152602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:06.305586  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:06.306078  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:06.381886  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:06.651902  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:06.805699  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:06.809393  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:06.882120  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:07.151992  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:07.306583  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:07.307286  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:07.381476  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:07.651568  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:07.806261  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:07.807968  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:07.824812  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:07.882157  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:08.151405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:08.305312  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:08.307691  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:08.381232  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:08.651209  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:08.806567  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:08.807262  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:08.881706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:09.151567  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:09.306304  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:09.307034  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:09.381639  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:09.651215  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:09.805915  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:09.808275  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:09.825146  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:09.882100  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:10.183065  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:10.307399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:10.309890  596166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 19:19:10.309970  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:10.324200  596166 node_ready.go:49] node "addons-747597" has status "Ready":"True"
	I0717 19:19:10.324263  596166 node_ready.go:38] duration metric: took 43.003517192s for node "addons-747597" to be "Ready" ...
	I0717 19:19:10.324305  596166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:19:10.356631  596166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:10.389794  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:10.656099  596166 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 19:19:10.656133  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:10.808055  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:10.814348  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:10.882335  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.173896  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:11.308111  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:11.309427  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:11.407981  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.653558  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:11.809542  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:11.811022  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:11.863675  596166 pod_ready.go:92] pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.863708  596166 pod_ready.go:81] duration metric: took 1.506999835s for pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.863753  596166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.871274  596166 pod_ready.go:92] pod "etcd-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.871301  596166 pod_ready.go:81] duration metric: took 7.527058ms for pod "etcd-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.871316  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.876869  596166 pod_ready.go:92] pod "kube-apiserver-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.876893  596166 pod_ready.go:81] duration metric: took 5.5668ms for pod "kube-apiserver-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.876905  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.881695  596166 pod_ready.go:92] pod "kube-controller-manager-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.881720  596166 pod_ready.go:81] duration metric: took 4.80603ms for pod "kube-controller-manager-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.881733  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gcfj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.882010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.926098  596166 pod_ready.go:92] pod "kube-proxy-6gcfj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.926126  596166 pod_ready.go:81] duration metric: took 44.38481ms for pod "kube-proxy-6gcfj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.926138  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.154853  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:12.310754  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:12.312180  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:12.326469  596166 pod_ready.go:92] pod "kube-scheduler-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:12.326543  596166 pod_ready.go:81] duration metric: took 400.396085ms for pod "kube-scheduler-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.326570  596166 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.382978  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:12.657154  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:12.806779  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:12.820825  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:12.881903  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:13.164496  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:13.306314  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:13.310845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:13.383013  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:13.661806  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:13.809706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:13.811158  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:13.882520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:14.152781  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:14.309497  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:14.310397  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:14.334777  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:14.382626  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:14.653900  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:14.809844  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:14.811947  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:14.882285  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:15.154070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:15.308150  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:15.309580  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:15.382201  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:15.653751  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:15.807737  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:15.809013  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:15.884007  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:16.153401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:16.306037  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:16.310276  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:16.382095  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:16.653006  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:16.809577  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:16.810910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:16.836016  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:16.882922  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:17.156236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:17.309968  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:17.311356  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:17.381282  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:17.652405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:17.807019  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:17.808332  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:17.881907  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:18.154264  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:18.308003  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:18.308965  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:18.381303  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:18.653490  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:18.808635  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:18.809373  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:18.882936  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:19.162608  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:19.310839  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:19.313093  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:19.349937  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:19.383547  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:19.654989  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:19.806827  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:19.811500  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:19.881633  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:20.154059  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:20.309932  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:20.311120  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:20.381469  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:20.652973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:20.807879  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:20.814657  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:20.881629  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:21.152940  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:21.306344  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:21.307644  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:21.382664  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:21.653320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:21.811355  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:21.813506  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:21.836486  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:21.882004  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:22.152728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:22.306762  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:22.313865  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:22.382585  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:22.654573  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:22.809399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:22.821305  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:22.882275  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:23.153964  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:23.307399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:23.308709  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:23.382295  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:23.652954  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:23.806953  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:23.809218  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:23.837073  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:23.886298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:24.153776  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:24.311267  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:24.312528  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:24.382106  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:24.652682  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:24.805710  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:24.808747  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:24.881749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:25.153011  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:25.307854  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:25.309018  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:25.382333  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:25.653015  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:25.807502  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:25.810351  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:25.882315  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:26.173681  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:26.306630  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:26.315510  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:26.333181  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:26.382497  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:26.653657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:26.812943  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:26.819499  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:26.882605  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:27.154963  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:27.307116  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:27.311207  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:27.383553  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:27.654194  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:27.812514  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:27.812844  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:27.881905  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:28.153420  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:28.309064  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:28.311211  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:28.333287  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:28.381587  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:28.654336  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:28.820278  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:28.830222  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:28.883166  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:29.155039  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:29.311066  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:29.312027  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:29.381578  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:29.653513  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:29.805743  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:29.809677  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:29.881960  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:30.153670  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:30.305870  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:30.308915  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:30.381428  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:30.652707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:30.808551  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:30.809309  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:30.842609  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:30.887149  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:31.152745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:31.308087  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:31.313520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:31.381966  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:31.671728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:31.807773  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:31.812621  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:31.881635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:32.153463  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:32.307758  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:32.310159  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:32.381450  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:32.653190  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:32.808017  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:32.811751  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:32.849134  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:32.885772  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:33.158697  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:33.312754  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:33.315420  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:33.383101  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:33.657191  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:33.815613  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:33.817592  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:33.882091  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:34.153595  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:34.306812  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:34.310635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:34.383007  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:34.653466  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:34.810103  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:34.811241  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:34.885070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:35.153263  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:35.310122  596166 kapi.go:107] duration metric: took 1m4.507180996s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 19:19:35.311781  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:35.337707  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:35.382214  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:35.653452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:35.815772  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:35.882827  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:36.153270  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:36.306402  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:36.381785  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:36.653179  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:36.805742  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:36.881758  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:37.155072  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:37.306618  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:37.382418  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:37.654182  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:37.808033  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:37.834031  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:37.882662  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:38.153582  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:38.307213  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:38.382715  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:38.652761  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:38.807810  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:38.881841  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:39.153094  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:39.306349  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:39.381430  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:39.653298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:39.805591  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:39.882045  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:40.153032  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:40.306007  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:40.332879  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:40.382112  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:40.653622  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:40.806975  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:40.882236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:41.154009  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:41.306620  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:41.382412  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:41.657924  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:41.806804  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:41.892602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:42.163353  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:42.308064  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:42.335644  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:42.382922  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:42.653935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:42.806568  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:42.882199  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:43.153505  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:43.306811  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:43.381463  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:43.652349  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:43.806333  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:43.881382  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:44.153025  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:44.307331  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:44.382737  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:44.652474  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:44.806338  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:44.833482  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:44.882529  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:45.169905  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:45.315822  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:45.382675  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:45.652808  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:45.805938  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:45.882335  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:46.153223  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:46.306372  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:46.382265  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:46.652582  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:46.807586  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:46.835261  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:46.881822  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:47.154518  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:47.307251  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:47.381981  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:47.653379  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:47.806308  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:47.882318  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:48.152511  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:48.307118  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:48.381949  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:48.653037  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:48.806865  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:48.883276  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:49.152088  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:49.306972  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:49.333904  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:49.381449  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:49.652515  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:49.805847  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:49.882319  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:50.152458  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:50.306491  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:50.382336  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:50.653643  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:50.809402  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:50.881991  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:51.157787  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:51.306469  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:51.335065  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:51.383749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:51.654822  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:51.806532  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:51.881619  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:52.160237  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:52.306759  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:52.381884  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:52.653187  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:52.806547  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:52.882156  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:53.153362  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:53.305639  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:53.381520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:53.655117  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:53.807254  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:53.834703  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:53.882298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:54.153303  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:54.306036  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:54.381994  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:54.665001  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:54.806638  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:54.882230  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:55.156192  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:55.306504  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:55.381598  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:55.653430  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:55.807970  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:55.882402  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:56.156266  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:56.306462  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:56.337581  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:56.381658  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:56.653340  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:56.806389  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:56.882525  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:57.152862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:57.306087  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:57.382028  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:57.654059  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:57.806929  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:57.881405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:58.153680  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:58.306268  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:58.381980  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:58.655452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:58.806712  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:58.833168  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:58.883564  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:59.154068  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:59.306585  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:59.382534  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:59.653043  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:59.807881  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:59.882446  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:00.213245  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:00.314393  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:00.438957  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:00.670828  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:00.807909  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:00.843623  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:00.911807  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:01.153733  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:01.306984  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:01.382865  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:01.653711  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:01.807397  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:01.882678  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:02.155443  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:02.307257  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:02.390766  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:02.654349  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:02.808461  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:02.884096  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:03.153592  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:03.305936  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:03.336214  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:03.381756  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:03.654714  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:03.806507  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:03.882657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:04.153795  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:04.306762  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:04.385065  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:04.653872  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:04.806822  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:04.882153  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:05.155446  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:05.306162  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:05.381965  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:05.655543  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:05.806720  596166 kapi.go:107] duration metric: took 1m35.005213294s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 19:20:05.833404  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:05.881832  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:06.153064  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:06.381429  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:06.654339  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:06.882586  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:07.155649  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:07.382706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:07.653210  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:07.882113  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:08.155707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:08.332982  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:08.381401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:08.652807  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:08.881676  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:09.153617  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:09.382170  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:09.653648  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:09.882749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:10.153215  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:10.337659  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:10.382942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:10.652602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:10.882153  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:11.152745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:11.381721  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:11.657139  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:11.882033  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:12.153454  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:12.381926  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:12.652935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:12.833334  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:12.881646  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:13.152452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:13.381976  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:13.655784  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:13.881977  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:14.153320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:14.381734  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:14.652094  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:14.881490  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:15.163134  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:15.333142  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:15.381501  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:15.653424  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:15.881821  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:16.155010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:16.382060  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:16.655782  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:16.881942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:17.153810  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:17.381841  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:17.653746  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:17.832570  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:17.881635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:18.155558  596166 kapi.go:107] duration metric: took 1m47.008633567s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 19:20:18.382626  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:18.882264  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:19.381699  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:19.832851  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:19.881282  596166 kapi.go:107] duration metric: took 1m45.003372639s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 19:20:19.894018  596166 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-747597 cluster.
	I0717 19:20:19.896084  596166 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 19:20:19.897790  596166 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 19:20:19.899801  596166 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0717 19:20:19.901709  596166 addons.go:510] duration metric: took 1m56.052205859s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0717 19:20:21.833250  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:24.333359  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:26.833059  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:29.332636  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:31.333398  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:33.333610  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:35.334910  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:37.833178  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:38.332848  596166 pod_ready.go:92] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:38.332875  596166 pod_ready.go:81] duration metric: took 1m26.006285543s for pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.332888  596166 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.338079  596166 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:38.338105  596166 pod_ready.go:81] duration metric: took 5.208498ms for pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.338125  596166 pod_ready.go:38] duration metric: took 1m28.013796055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:20:38.338140  596166 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:20:38.338885  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:20:38.338956  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:20:38.392473  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:38.392512  596166 cri.go:89] found id: ""
	I0717 19:20:38.392522  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:20:38.392586  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.396885  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:20:38.396966  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:20:38.440953  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:38.440973  596166 cri.go:89] found id: ""
	I0717 19:20:38.440980  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:20:38.441037  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.444468  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:20:38.444542  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:20:38.484525  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:38.484546  596166 cri.go:89] found id: ""
	I0717 19:20:38.484554  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:20:38.484617  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.488077  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:20:38.488153  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:20:38.530668  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:38.530692  596166 cri.go:89] found id: ""
	I0717 19:20:38.530700  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:20:38.530801  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.534518  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:20:38.534619  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:20:38.573602  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:38.573622  596166 cri.go:89] found id: ""
	I0717 19:20:38.573630  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:20:38.573687  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.577044  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:20:38.577117  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:20:38.616783  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:38.616803  596166 cri.go:89] found id: ""
	I0717 19:20:38.616811  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:20:38.616867  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.620301  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:20:38.620402  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:20:38.661177  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:38.661200  596166 cri.go:89] found id: ""
	I0717 19:20:38.661208  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:20:38.661265  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.664587  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:20:38.664629  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:38.707813  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:20:38.707841  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:38.746640  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:20:38.746668  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:38.801090  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:20:38.801119  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:20:38.856864  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857108  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857287  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857480  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:38.890457  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:20:38.890492  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:38.938571  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:20:38.938606  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:38.986363  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:20:38.986398  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:39.068362  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:20:39.068405  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:20:39.161501  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:20:39.161541  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:20:39.216659  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:20:39.216689  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:20:39.237417  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:20:39.237447  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:20:39.405954  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:20:39.405984  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:39.458628  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:39.458660  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:20:39.458708  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:20:39.458720  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458730  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458742  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458749  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:39.458758  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:39.458764  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:20:49.460164  596166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:49.473636  596166 api_server.go:72] duration metric: took 2m25.624488221s to wait for apiserver process to appear ...
	I0717 19:20:49.473664  596166 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:20:49.473696  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:20:49.473754  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:20:49.513244  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:49.513264  596166 cri.go:89] found id: ""
	I0717 19:20:49.513272  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:20:49.513330  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.517172  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:20:49.517242  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:20:49.558151  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:49.558183  596166 cri.go:89] found id: ""
	I0717 19:20:49.558193  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:20:49.558267  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.561725  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:20:49.561796  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:20:49.601996  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:49.602019  596166 cri.go:89] found id: ""
	I0717 19:20:49.602026  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:20:49.602084  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.605540  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:20:49.605618  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:20:49.645276  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:49.645299  596166 cri.go:89] found id: ""
	I0717 19:20:49.645307  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:20:49.645362  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.648759  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:20:49.648829  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:20:49.686778  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:49.686798  596166 cri.go:89] found id: ""
	I0717 19:20:49.686807  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:20:49.686880  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.690464  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:20:49.690537  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:20:49.729136  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:49.729169  596166 cri.go:89] found id: ""
	I0717 19:20:49.729178  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:20:49.729253  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.732947  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:20:49.733019  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:20:49.775402  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:49.775427  596166 cri.go:89] found id: ""
	I0717 19:20:49.775435  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:20:49.775499  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.779243  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:20:49.779271  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:49.823283  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:20:49.823312  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:49.867035  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:20:49.867061  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:49.956705  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:20:49.956742  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:20:50.067188  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:20:50.067229  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:20:50.088589  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:20:50.088626  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:50.142767  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:20:50.142804  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:50.208211  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:20:50.208242  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:50.259213  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:20:50.259246  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:20:50.329925  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:20:50.329953  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:20:50.378251  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378471  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378647  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378838  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:50.413960  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:20:50.413992  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:20:50.562856  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:20:50.562892  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:50.620147  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:50.620178  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:20:50.620227  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:20:50.620239  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620246  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620260  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620273  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:50.620285  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:50.620291  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:21:00.620932  596166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 19:21:00.657740  596166 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 19:21:00.660419  596166 api_server.go:141] control plane version: v1.30.2
	I0717 19:21:00.660443  596166 api_server.go:131] duration metric: took 11.186772098s to wait for apiserver health ...
	I0717 19:21:00.660453  596166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:21:00.660474  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:21:00.660536  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:21:00.714414  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:21:00.714436  596166 cri.go:89] found id: ""
	I0717 19:21:00.714444  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:21:00.714501  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.718323  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:21:00.718398  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:21:00.763288  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:21:00.763310  596166 cri.go:89] found id: ""
	I0717 19:21:00.763318  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:21:00.763391  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.767433  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:21:00.767497  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:21:00.806950  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:21:00.806972  596166 cri.go:89] found id: ""
	I0717 19:21:00.806981  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:21:00.807038  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.810420  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:21:00.810508  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:21:00.853090  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:21:00.853111  596166 cri.go:89] found id: ""
	I0717 19:21:00.853119  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:21:00.853196  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.856635  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:21:00.856716  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:21:00.897080  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:21:00.897113  596166 cri.go:89] found id: ""
	I0717 19:21:00.897122  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:21:00.897209  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.900748  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:21:00.900871  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:21:00.938419  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:21:00.938442  596166 cri.go:89] found id: ""
	I0717 19:21:00.938450  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:21:00.938526  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.942361  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:21:00.942462  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:21:00.983503  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:21:00.983565  596166 cri.go:89] found id: ""
	I0717 19:21:00.983599  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:21:00.983671  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.987253  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:21:00.987278  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:21:01.062314  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:21:01.062346  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:21:01.112582  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:21:01.112619  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:21:01.131715  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:21:01.131747  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:21:01.277187  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:21:01.277215  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:21:01.337345  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:21:01.337380  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:21:01.377863  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:21:01.377898  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:21:01.447789  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:21:01.447826  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:21:01.498689  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.498938  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.499121  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.499313  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:21:01.534308  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:21:01.534339  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:21:01.583096  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:21:01.583131  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:21:01.638408  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:21:01.638445  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:21:01.676654  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:21:01.676682  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:21:01.771632  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:21:01.771664  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:21:01.771747  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:21:01.771760  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771786  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771805  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771818  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:21:01.771824  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:21:01.771830  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:21:11.784350  596166 system_pods.go:59] 18 kube-system pods found
	I0717 19:21:11.784389  596166 system_pods.go:61] "coredns-7db6d8ff4d-vx2ls" [082916ef-1119-4778-9742-38e8695b17eb] Running
	I0717 19:21:11.784399  596166 system_pods.go:61] "csi-hostpath-attacher-0" [1358f44c-4762-4923-af91-c24f5aac1261] Running
	I0717 19:21:11.784404  596166 system_pods.go:61] "csi-hostpath-resizer-0" [9ff03202-1a5b-4edd-8712-5fb2b57bc80d] Running
	I0717 19:21:11.784408  596166 system_pods.go:61] "csi-hostpathplugin-b2j8t" [46f4a30f-3aa2-4a55-93a0-d60b33eb8447] Running
	I0717 19:21:11.784412  596166 system_pods.go:61] "etcd-addons-747597" [604c419d-7405-426d-8546-7b8a298fd63f] Running
	I0717 19:21:11.784417  596166 system_pods.go:61] "kindnet-hr4v9" [249b1478-18aa-46b8-ac5c-c98c42238bcd] Running
	I0717 19:21:11.784421  596166 system_pods.go:61] "kube-apiserver-addons-747597" [9cdb0970-bdae-46ff-835b-309056cdb2f3] Running
	I0717 19:21:11.784426  596166 system_pods.go:61] "kube-controller-manager-addons-747597" [72516744-2858-4838-a858-6f42cefe9915] Running
	I0717 19:21:11.784430  596166 system_pods.go:61] "kube-ingress-dns-minikube" [21c73a81-efe6-4fc7-b825-b2655ceeaab5] Running
	I0717 19:21:11.784444  596166 system_pods.go:61] "kube-proxy-6gcfj" [ad90d9f5-2b4a-49c6-b1e8-b3dd0668fa24] Running
	I0717 19:21:11.784452  596166 system_pods.go:61] "kube-scheduler-addons-747597" [4224e17d-41c4-4b65-967d-19655bbedcfa] Running
	I0717 19:21:11.784456  596166 system_pods.go:61] "metrics-server-c59844bb4-m2zcj" [ecfedd7e-e869-4dd1-b482-62f0706cc601] Running
	I0717 19:21:11.784460  596166 system_pods.go:61] "nvidia-device-plugin-daemonset-8tq66" [e1a33d1c-572f-4efa-b24a-abffc419c427] Running
	I0717 19:21:11.784464  596166 system_pods.go:61] "registry-656c9c8d9c-4kkkf" [9820910e-bb3a-48fe-b2d1-5c69c2b66429] Running
	I0717 19:21:11.784470  596166 system_pods.go:61] "registry-proxy-qczlm" [dc1faa8a-6f1b-41a9-b047-b18156274ad5] Running
	I0717 19:21:11.784475  596166 system_pods.go:61] "snapshot-controller-745499f584-f69f7" [a944a321-09b4-4286-9302-a0657345e9b7] Running
	I0717 19:21:11.784482  596166 system_pods.go:61] "snapshot-controller-745499f584-tbjqv" [12edb1ce-6753-4e89-a1b4-f6bbfad2d478] Running
	I0717 19:21:11.784486  596166 system_pods.go:61] "storage-provisioner" [3d085cc1-2744-4f4a-a266-eb70ec60d46a] Running
	I0717 19:21:11.784492  596166 system_pods.go:74] duration metric: took 11.124033696s to wait for pod list to return data ...
	I0717 19:21:11.784504  596166 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:21:11.786738  596166 default_sa.go:45] found service account: "default"
	I0717 19:21:11.786762  596166 default_sa.go:55] duration metric: took 2.252467ms for default service account to be created ...
	I0717 19:21:11.786772  596166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:21:11.796544  596166 system_pods.go:86] 18 kube-system pods found
	I0717 19:21:11.796644  596166 system_pods.go:89] "coredns-7db6d8ff4d-vx2ls" [082916ef-1119-4778-9742-38e8695b17eb] Running
	I0717 19:21:11.796666  596166 system_pods.go:89] "csi-hostpath-attacher-0" [1358f44c-4762-4923-af91-c24f5aac1261] Running
	I0717 19:21:11.796684  596166 system_pods.go:89] "csi-hostpath-resizer-0" [9ff03202-1a5b-4edd-8712-5fb2b57bc80d] Running
	I0717 19:21:11.796715  596166 system_pods.go:89] "csi-hostpathplugin-b2j8t" [46f4a30f-3aa2-4a55-93a0-d60b33eb8447] Running
	I0717 19:21:11.796799  596166 system_pods.go:89] "etcd-addons-747597" [604c419d-7405-426d-8546-7b8a298fd63f] Running
	I0717 19:21:11.796823  596166 system_pods.go:89] "kindnet-hr4v9" [249b1478-18aa-46b8-ac5c-c98c42238bcd] Running
	I0717 19:21:11.796842  596166 system_pods.go:89] "kube-apiserver-addons-747597" [9cdb0970-bdae-46ff-835b-309056cdb2f3] Running
	I0717 19:21:11.796856  596166 system_pods.go:89] "kube-controller-manager-addons-747597" [72516744-2858-4838-a858-6f42cefe9915] Running
	I0717 19:21:11.796862  596166 system_pods.go:89] "kube-ingress-dns-minikube" [21c73a81-efe6-4fc7-b825-b2655ceeaab5] Running
	I0717 19:21:11.796869  596166 system_pods.go:89] "kube-proxy-6gcfj" [ad90d9f5-2b4a-49c6-b1e8-b3dd0668fa24] Running
	I0717 19:21:11.796874  596166 system_pods.go:89] "kube-scheduler-addons-747597" [4224e17d-41c4-4b65-967d-19655bbedcfa] Running
	I0717 19:21:11.796882  596166 system_pods.go:89] "metrics-server-c59844bb4-m2zcj" [ecfedd7e-e869-4dd1-b482-62f0706cc601] Running
	I0717 19:21:11.796886  596166 system_pods.go:89] "nvidia-device-plugin-daemonset-8tq66" [e1a33d1c-572f-4efa-b24a-abffc419c427] Running
	I0717 19:21:11.796890  596166 system_pods.go:89] "registry-656c9c8d9c-4kkkf" [9820910e-bb3a-48fe-b2d1-5c69c2b66429] Running
	I0717 19:21:11.796896  596166 system_pods.go:89] "registry-proxy-qczlm" [dc1faa8a-6f1b-41a9-b047-b18156274ad5] Running
	I0717 19:21:11.796903  596166 system_pods.go:89] "snapshot-controller-745499f584-f69f7" [a944a321-09b4-4286-9302-a0657345e9b7] Running
	I0717 19:21:11.796932  596166 system_pods.go:89] "snapshot-controller-745499f584-tbjqv" [12edb1ce-6753-4e89-a1b4-f6bbfad2d478] Running
	I0717 19:21:11.796942  596166 system_pods.go:89] "storage-provisioner" [3d085cc1-2744-4f4a-a266-eb70ec60d46a] Running
	I0717 19:21:11.796950  596166 system_pods.go:126] duration metric: took 10.172942ms to wait for k8s-apps to be running ...
	I0717 19:21:11.796962  596166 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:21:11.797031  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:21:11.808603  596166 system_svc.go:56] duration metric: took 11.631622ms WaitForService to wait for kubelet
	I0717 19:21:11.808633  596166 kubeadm.go:582] duration metric: took 2m47.959489442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:21:11.808654  596166 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:21:11.812662  596166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 19:21:11.812697  596166 node_conditions.go:123] node cpu capacity is 2
	I0717 19:21:11.812709  596166 node_conditions.go:105] duration metric: took 4.049739ms to run NodePressure ...
	I0717 19:21:11.812731  596166 start.go:241] waiting for startup goroutines ...
	I0717 19:21:11.812740  596166 start.go:246] waiting for cluster config update ...
	I0717 19:21:11.812758  596166 start.go:255] writing updated cluster config ...
	I0717 19:21:11.813047  596166 ssh_runner.go:195] Run: rm -f paused
	I0717 19:21:12.165166  596166 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:21:12.169124  596166 out.go:177] * Done! kubectl is now configured to use "addons-747597" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.179704574Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=315ac207-e54e-4b76-847f-eca3f7074df5 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.181167282Z" level=info msg="Creating container: default/hello-world-app-6778b5fc9f-9s966/hello-world-app" id=308b2c20-3963-4784-bdf2-0fb5851bb9c6 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.181276968Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.202353977Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0655a91b83c4560bc16fc7032438ec82a86beda023bea3124aa6a977000af5c7/merged/etc/passwd: no such file or directory"
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.202399794Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0655a91b83c4560bc16fc7032438ec82a86beda023bea3124aa6a977000af5c7/merged/etc/group: no such file or directory"
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.243647378Z" level=info msg="Created container f03a99eddc5ee203276199e62fe233e8fb621acabd862c05e11eb0eb6c160dbd: default/hello-world-app-6778b5fc9f-9s966/hello-world-app" id=308b2c20-3963-4784-bdf2-0fb5851bb9c6 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.244437235Z" level=info msg="Starting container: f03a99eddc5ee203276199e62fe233e8fb621acabd862c05e11eb0eb6c160dbd" id=afd7f9a4-0adf-4cd7-ad97-f1223ab72d5a name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.255581330Z" level=info msg="Started container" PID=8046 containerID=f03a99eddc5ee203276199e62fe233e8fb621acabd862c05e11eb0eb6c160dbd description=default/hello-world-app-6778b5fc9f-9s966/hello-world-app id=afd7f9a4-0adf-4cd7-ad97-f1223ab72d5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b0240e0948ce6c0e2eacde8f9f54426c816f6a598a7f2056e7a6e26b18f4afd
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.600055676Z" level=info msg="Removing container: ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f" id=be746227-c72f-414a-8972-105cc49a7284 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:24:57 addons-747597 crio[966]: time="2024-07-17 19:24:57.621724216Z" level=info msg="Removed container ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=be746227-c72f-414a-8972-105cc49a7284 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:24:59 addons-747597 crio[966]: time="2024-07-17 19:24:59.336194082Z" level=info msg="Stopping container: f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b (timeout: 2s)" id=65f6a635-4459-43b2-ac42-760fc502fda6 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.342701703Z" level=warning msg="Stopping container f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=65f6a635-4459-43b2-ac42-760fc502fda6 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 19:25:01 addons-747597 conmon[4704]: conmon f6d015a1ea3b43d01a4f <ninfo>: container 4716 exited with status 137
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.491066734Z" level=info msg="Stopped container f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b: ingress-nginx/ingress-nginx-controller-768f948f8f-ck4j6/controller" id=65f6a635-4459-43b2-ac42-760fc502fda6 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.491732858Z" level=info msg="Stopping pod sandbox: fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=9ebbd852-f102-40b8-90e2-bf8c71d97422 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.495434748Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-C4EFN4QC3ZI62NWM - [0:0]\n:KUBE-HP-WLP5SX5GPUSB33WO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-C4EFN4QC3ZI62NWM\n-X KUBE-HP-WLP5SX5GPUSB33WO\nCOMMIT\n"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.496895749Z" level=info msg="Closing host port tcp:80"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.496943528Z" level=info msg="Closing host port tcp:443"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.498324570Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.498357292Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.498537829Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-ck4j6 Namespace:ingress-nginx ID:fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31 UID:b8db8990-e740-4420-98d1-f8f1a63f2954 NetNS:/var/run/netns/8f4cf54b-5035-4701-8141-677ff34feb6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.498694489Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-ck4j6 from CNI network \"kindnet\" (type=ptp)"
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.517071288Z" level=info msg="Stopped pod sandbox: fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=9ebbd852-f102-40b8-90e2-bf8c71d97422 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.613487971Z" level=info msg="Removing container: f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b" id=911f6f0a-1a20-4c45-afa2-fb27125f9035 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:25:01 addons-747597 crio[966]: time="2024-07-17 19:25:01.631235159Z" level=info msg="Removed container f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b: ingress-nginx/ingress-nginx-controller-768f948f8f-ck4j6/controller" id=911f6f0a-1a20-4c45-afa2-fb27125f9035 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f03a99eddc5ee       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   2b0240e0948ce       hello-world-app-6778b5fc9f-9s966
	a9ce00812b756       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   55085fd909ea2       nginx
	167447215a84b       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        3 minutes ago       Running             headlamp                  0                   e0e0d584ddf79       headlamp-7867546754-g6rr2
	78c53578da440       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                  0                   3d73319215130       gcp-auth-5db96cd9b4-twc52
	2c5f0c15cf301       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                     0                   30f06f9260db2       ingress-nginx-admission-patch-4t94j
	769fb0f4d544f       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago       Running             yakd                      0                   04e3e03489361       yakd-dashboard-799879c74f-ftstw
	799d68539952b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   37a5a3ad7daa6       ingress-nginx-admission-create-m94z8
	415ce64e87ebf       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   7ca326c992ed6       metrics-server-c59844bb4-m2zcj
	ba3ec42298409       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   d7bf4964831cc       storage-provisioner
	6b259081db958       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   9a46fd8b7c114       coredns-7db6d8ff4d-vx2ls
	b1015172052bc       docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493                           6 minutes ago       Running             kindnet-cni               0                   a618100571d9f       kindnet-hr4v9
	61ff260c86790       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                                             6 minutes ago       Running             kube-proxy                0                   043e01218f51d       kube-proxy-6gcfj
	e41f5b0b2a396       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                                             7 minutes ago       Running             kube-apiserver            0                   0c6ee66dc17d0       kube-apiserver-addons-747597
	498353d1326cf       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                                             7 minutes ago       Running             kube-scheduler            0                   3f1f5a0bc5736       kube-scheduler-addons-747597
	4b65ebb30b9af       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                                             7 minutes ago       Running             kube-controller-manager   0                   ba3f2aae5569f       kube-controller-manager-addons-747597
	aafaeaa9e53bf       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   ccb207d7f192e       etcd-addons-747597
	
	
	==> coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] <==
	[INFO] 10.244.0.3:50454 - 20574 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001673695s
	[INFO] 10.244.0.3:36762 - 59039 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119425s
	[INFO] 10.244.0.3:36762 - 56985 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041953s
	[INFO] 10.244.0.3:41859 - 40266 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160016s
	[INFO] 10.244.0.3:41859 - 59478 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177271s
	[INFO] 10.244.0.3:40088 - 17247 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070818s
	[INFO] 10.244.0.3:40088 - 42048 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054646s
	[INFO] 10.244.0.3:33440 - 18761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108882s
	[INFO] 10.244.0.3:33440 - 26187 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00013083s
	[INFO] 10.244.0.3:46143 - 33670 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002220607s
	[INFO] 10.244.0.3:46143 - 14724 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002434572s
	[INFO] 10.244.0.3:43331 - 27145 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102416s
	[INFO] 10.244.0.3:43331 - 45579 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065583s
	[INFO] 10.244.0.20:55824 - 63825 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001411271s
	[INFO] 10.244.0.20:45109 - 8791 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001537949s
	[INFO] 10.244.0.20:56733 - 22193 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142843s
	[INFO] 10.244.0.20:50646 - 47549 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096394s
	[INFO] 10.244.0.20:58217 - 64747 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099101s
	[INFO] 10.244.0.20:46105 - 34875 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103819s
	[INFO] 10.244.0.20:36242 - 4054 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002926296s
	[INFO] 10.244.0.20:60183 - 37689 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002563531s
	[INFO] 10.244.0.20:54941 - 64368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000828183s
	[INFO] 10.244.0.20:54211 - 15697 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001061593s
	[INFO] 10.244.0.22:35302 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196291s
	[INFO] 10.244.0.22:42006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129124s
	
	
	==> describe nodes <==
	Name:               addons-747597
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-747597
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=addons-747597
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-747597
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-747597
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:24:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:22:45 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:22:45 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:22:45 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:22:45 +0000   Wed, 17 Jul 2024 19:19:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-747597
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cb652b4b04d43dbb605d68e346e8a8e
	  System UUID:                242ae9c1-ad18-41b5-803f-f2a7108e3122
	  Boot ID:                    69f17618-36a4-458d-bf7b-8c41eea0ca4f
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-9s966         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-5db96cd9b4-twc52                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  headlamp                    headlamp-7867546754-g6rr2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7db6d8ff4d-vx2ls                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m43s
	  kube-system                 etcd-addons-747597                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m56s
	  kube-system                 kindnet-hr4v9                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m43s
	  kube-system                 kube-apiserver-addons-747597             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-controller-manager-addons-747597    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-proxy-6gcfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-scheduler-addons-747597             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 metrics-server-c59844bb4-m2zcj           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  yakd-dashboard              yakd-dashboard-799879c74f-ftstw          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m36s                kube-proxy       
	  Normal  NodeHasSufficientMemory  7m4s (x8 over 7m4s)  kubelet          Node addons-747597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m4s (x8 over 7m4s)  kubelet          Node addons-747597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m4s (x8 over 7m4s)  kubelet          Node addons-747597 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m56s                kubelet          Node addons-747597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s                kubelet          Node addons-747597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s                kubelet          Node addons-747597 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m43s                node-controller  Node addons-747597 event: Registered Node addons-747597 in Controller
	  Normal  NodeReady                5m56s                kubelet          Node addons-747597 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001089] FS-Cache: O-key=[8] 'e23a5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=000000008afc45ed
	[  +0.001075] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +0.002289] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=00000000d550a1d6
	[  +0.001111] FS-Cache: O-key=[8] 'e23a5c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=000000000f99c7cd
	[  +0.001063] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +2.685345] FS-Cache: Duplicate cookie detected
	[  +0.000810] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=000000007d1888d1
	[  +0.001092] FS-Cache: O-key=[8] 'e13a5c0100000000'
	[  +0.000723] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=0000000078612e88
	[  +0.001083] FS-Cache: N-key=[8] 'e13a5c0100000000'
	[  +0.415518] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=00000000e9178742
	[  +0.001155] FS-Cache: O-key=[8] 'e73a5c0100000000'
	[  +0.000755] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=0000000087fd2114
	[  +0.001094] FS-Cache: N-key=[8] 'e73a5c0100000000'
	
	
	==> etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] <==
	{"level":"info","ts":"2024-07-17T19:18:04.277164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-17T19:18:04.277557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.277636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.278105Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.291704Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T19:18:24.233608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.290013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-hr4v9\" ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2024-07-17T19:18:24.233751Z","caller":"traceutil/trace.go:171","msg":"trace[805563241] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-hr4v9; range_end:; response_count:1; response_revision:387; }","duration":"127.59375ms","start":"2024-07-17T19:18:24.106144Z","end":"2024-07-17T19:18:24.233738Z","steps":["trace[805563241] 'range keys from in-memory index tree'  (duration: 126.450047ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:26.992139Z","caller":"traceutil/trace.go:171","msg":"trace[1646014723] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"184.323609ms","start":"2024-07-17T19:18:26.807799Z","end":"2024-07-17T19:18:26.992123Z","steps":["trace[1646014723] 'process raft request'  (duration: 184.228027ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.02561Z","caller":"traceutil/trace.go:171","msg":"trace[1511432462] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"177.950155ms","start":"2024-07-17T19:18:26.847644Z","end":"2024-07-17T19:18:27.025594Z","steps":["trace[1511432462] 'process raft request'  (duration: 177.637524ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.212832Z","caller":"traceutil/trace.go:171","msg":"trace[2112153299] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"116.687553ms","start":"2024-07-17T19:18:27.096135Z","end":"2024-07-17T19:18:27.212822Z","steps":["trace[2112153299] 'process raft request'  (duration: 116.423094ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.212666Z","caller":"traceutil/trace.go:171","msg":"trace[1858015837] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"116.030774ms","start":"2024-07-17T19:18:27.09662Z","end":"2024-07-17T19:18:27.212651Z","steps":["trace[1858015837] 'read index received'  (duration: 115.916534ms)","trace[1858015837] 'applied index is now lower than readState.Index'  (duration: 113.69µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:18:27.322709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.939644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-17T19:18:27.322861Z","caller":"traceutil/trace.go:171","msg":"trace[1256833354] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:407; }","duration":"226.227253ms","start":"2024-07-17T19:18:27.096598Z","end":"2024-07-17T19:18:27.322825Z","steps":["trace[1256833354] 'agreement among raft nodes before linearized reading'  (duration: 163.324441ms)","trace[1256833354] 'range keys from in-memory index tree'  (duration: 32.553837ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:18:28.652879Z","caller":"traceutil/trace.go:171","msg":"trace[1352936546] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"130.455561ms","start":"2024-07-17T19:18:28.522407Z","end":"2024-07-17T19:18:28.652862Z","steps":["trace[1352936546] 'read index received'  (duration: 22.572793ms)","trace[1352936546] 'applied index is now lower than readState.Index'  (duration: 107.882062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:18:28.654458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.692996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-747597\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-17T19:18:28.70779Z","caller":"traceutil/trace.go:171","msg":"trace[1334954012] range","detail":"{range_begin:/registry/minions/addons-747597; range_end:; response_count:1; response_revision:444; }","duration":"250.679271ms","start":"2024-07-17T19:18:28.457089Z","end":"2024-07-17T19:18:28.707768Z","steps":["trace[1334954012] 'agreement among raft nodes before linearized reading'  (duration: 196.577674ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.665065Z","caller":"traceutil/trace.go:171","msg":"trace[64051045] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"149.922906ms","start":"2024-07-17T19:18:28.515106Z","end":"2024-07-17T19:18:28.665029Z","steps":["trace[64051045] 'process raft request'  (duration: 137.532585ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.66518Z","caller":"traceutil/trace.go:171","msg":"trace[1437021553] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"127.006434ms","start":"2024-07-17T19:18:28.538165Z","end":"2024-07-17T19:18:28.665171Z","steps":["trace[1437021553] 'process raft request'  (duration: 114.626107ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.665258Z","caller":"traceutil/trace.go:171","msg":"trace[1043602737] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"127.016396ms","start":"2024-07-17T19:18:28.538231Z","end":"2024-07-17T19:18:28.665247Z","steps":["trace[1043602737] 'process raft request'  (duration: 114.596849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.665355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.438116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-17T19:18:28.708447Z","caller":"traceutil/trace.go:171","msg":"trace[430166174] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:444; }","duration":"196.54185ms","start":"2024-07-17T19:18:28.511896Z","end":"2024-07-17T19:18:28.708438Z","steps":["trace[430166174] 'agreement among raft nodes before linearized reading'  (duration: 153.375929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.665394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.843717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:18:28.708695Z","caller":"traceutil/trace.go:171","msg":"trace[700972935] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:444; }","duration":"226.14265ms","start":"2024-07-17T19:18:28.482544Z","end":"2024-07-17T19:18:28.708687Z","steps":["trace[700972935] 'agreement among raft nodes before linearized reading'  (duration: 182.830383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.68554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.398973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:18:28.708913Z","caller":"traceutil/trace.go:171","msg":"trace[2028799458] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:452; }","duration":"170.780263ms","start":"2024-07-17T19:18:28.538122Z","end":"2024-07-17T19:18:28.708902Z","steps":["trace[2028799458] 'agreement among raft nodes before linearized reading'  (duration: 147.387304ms)"],"step_count":1}
	
	
	==> gcp-auth [78c53578da4401fc6cac8200a6235fe592d2ca4aa09fe9241ad3608e21567215] <==
	2024/07/17 19:20:19 GCP Auth Webhook started!
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:23 Ready to marshal response ...
	2024/07/17 19:21:23 Ready to write response ...
	2024/07/17 19:21:30 Ready to marshal response ...
	2024/07/17 19:21:30 Ready to write response ...
	2024/07/17 19:21:30 Ready to marshal response ...
	2024/07/17 19:21:30 Ready to write response ...
	2024/07/17 19:21:40 Ready to marshal response ...
	2024/07/17 19:21:40 Ready to write response ...
	2024/07/17 19:21:45 Ready to marshal response ...
	2024/07/17 19:21:45 Ready to write response ...
	2024/07/17 19:22:13 Ready to marshal response ...
	2024/07/17 19:22:13 Ready to write response ...
	2024/07/17 19:22:36 Ready to marshal response ...
	2024/07/17 19:22:36 Ready to write response ...
	2024/07/17 19:24:55 Ready to marshal response ...
	2024/07/17 19:24:55 Ready to write response ...
	
	
	==> kernel <==
	 19:25:06 up  3:07,  0 users,  load average: 0.14, 0.94, 1.87
	Linux addons-747597 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] <==
	E0717 19:23:57.571576       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 19:23:59.748793       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:23:59.748832       1 main.go:303] handling current node
	W0717 19:24:05.576568       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 19:24:05.576610       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0717 19:24:07.381297       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:24:07.381335       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 19:24:09.749100       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:09.749139       1 main.go:303] handling current node
	I0717 19:24:19.749150       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:19.749184       1 main.go:303] handling current node
	I0717 19:24:29.748737       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:29.748856       1 main.go:303] handling current node
	I0717 19:24:39.748944       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:39.748984       1 main.go:303] handling current node
	W0717 19:24:48.998447       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:24:48.998485       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 19:24:49.749280       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:49.749314       1 main.go:303] handling current node
	W0717 19:24:58.127528       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 19:24:58.127646       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 19:24:59.749062       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:24:59.749097       1 main.go:303] handling current node
	W0717 19:25:01.180451       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:25:01.180490       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] <==
	I0717 19:21:13.251865       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.59.34"}
	E0717 19:21:41.418310       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:41.429499       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:41.442140       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:56.441181       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 19:21:58.932897       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 19:22:01.266943       1 watch.go:250] http2: stream closed
	I0717 19:22:29.947016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:29.947071       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.002239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.002299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.022272       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.022403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.064335       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.066234       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.122675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.127501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.734774       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 19:22:31.022639       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 19:22:31.194679       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 19:22:31.273735       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0717 19:22:31.763456       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 19:22:36.317670       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 19:22:36.651441       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.125.8"}
	I0717 19:24:55.917985       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.147.0"}
	
	
	==> kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] <==
	E0717 19:23:32.196834       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:23:42.123693       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:23:42.123750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:24:02.843701       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:24:02.843740       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:24:08.783756       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:24:08.783793       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:24:13.067267       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:24:13.067308       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:24:24.274850       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:24:24.274888       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:24:35.969544       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:24:35.969582       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 19:24:55.711730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="62.08146ms"
	I0717 19:24:55.748395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.612683ms"
	I0717 19:24:55.748572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="53.916µs"
	I0717 19:24:57.632017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.483349ms"
	I0717 19:24:57.632256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="40.992µs"
	I0717 19:24:58.313567       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 19:24:58.316371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="8.238µs"
	I0717 19:24:58.324581       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0717 19:25:01.348597       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:25:01.348638       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:25:06.822708       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:25:06.822751       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] <==
	I0717 19:18:30.095480       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:18:30.266721       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 19:18:30.440452       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 19:18:30.440510       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:18:30.483145       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 19:18:30.483174       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 19:18:30.483198       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:18:30.483480       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:18:30.483503       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:18:30.485152       1 config.go:319] "Starting node config controller"
	I0717 19:18:30.485271       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:18:30.485609       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:18:30.486250       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:18:30.486393       1 config.go:192] "Starting service config controller"
	I0717 19:18:30.486427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:18:30.588780       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:18:30.588870       1 shared_informer.go:320] Caches are synced for node config
	I0717 19:18:30.588901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] <==
	I0717 19:18:08.179441       1 serving.go:380] Generated self-signed cert in-memory
	I0717 19:18:09.370286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 19:18:09.370453       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:18:09.378422       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 19:18:09.378516       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 19:18:09.378525       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 19:18:09.378546       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:18:09.379874       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 19:18:09.379963       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 19:18:09.387435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:18:09.387479       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:18:09.479037       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0717 19:18:09.484710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 19:18:09.487936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:24:55 addons-747597 kubelet[1547]: I0717 19:24:55.706663    1547 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d82bc2-dbae-4517-9d25-aebfb1795e42" containerName="gadget"
	Jul 17 19:24:55 addons-747597 kubelet[1547]: I0717 19:24:55.789554    1547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/88962bd7-65d5-4b56-9513-c1fe1c7843d3-gcp-creds\") pod \"hello-world-app-6778b5fc9f-9s966\" (UID: \"88962bd7-65d5-4b56-9513-c1fe1c7843d3\") " pod="default/hello-world-app-6778b5fc9f-9s966"
	Jul 17 19:24:55 addons-747597 kubelet[1547]: I0717 19:24:55.789625    1547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpzr\" (UniqueName: \"kubernetes.io/projected/88962bd7-65d5-4b56-9513-c1fe1c7843d3-kube-api-access-kxpzr\") pod \"hello-world-app-6778b5fc9f-9s966\" (UID: \"88962bd7-65d5-4b56-9513-c1fe1c7843d3\") " pod="default/hello-world-app-6778b5fc9f-9s966"
	Jul 17 19:24:56 addons-747597 kubelet[1547]: I0717 19:24:56.997808    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvpkk\" (UniqueName: \"kubernetes.io/projected/21c73a81-efe6-4fc7-b825-b2655ceeaab5-kube-api-access-rvpkk\") pod \"21c73a81-efe6-4fc7-b825-b2655ceeaab5\" (UID: \"21c73a81-efe6-4fc7-b825-b2655ceeaab5\") "
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.011879    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c73a81-efe6-4fc7-b825-b2655ceeaab5-kube-api-access-rvpkk" (OuterVolumeSpecName: "kube-api-access-rvpkk") pod "21c73a81-efe6-4fc7-b825-b2655ceeaab5" (UID: "21c73a81-efe6-4fc7-b825-b2655ceeaab5"). InnerVolumeSpecName "kube-api-access-rvpkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.099118    1547 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rvpkk\" (UniqueName: \"kubernetes.io/projected/21c73a81-efe6-4fc7-b825-b2655ceeaab5-kube-api-access-rvpkk\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.598410    1547 scope.go:117] "RemoveContainer" containerID="ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f"
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.622060    1547 scope.go:117] "RemoveContainer" containerID="ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f"
	Jul 17 19:24:57 addons-747597 kubelet[1547]: E0717 19:24:57.622443    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f\": container with ID starting with ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f not found: ID does not exist" containerID="ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f"
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.622482    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f"} err="failed to get container status \"ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f\": rpc error: code = NotFound desc = could not find container \"ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f\": container with ID starting with ae7d2d20b844ae5f2a751cf14b930f00258dbbd4890294a18a820726f50cdc9f not found: ID does not exist"
	Jul 17 19:24:57 addons-747597 kubelet[1547]: I0717 19:24:57.637101    1547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-9s966" podStartSLOduration=1.5301229200000002 podStartE2EDuration="2.637080879s" podCreationTimestamp="2024-07-17 19:24:55 +0000 UTC" firstStartedPulling="2024-07-17 19:24:56.071272452 +0000 UTC m=+406.171580507" lastFinishedPulling="2024-07-17 19:24:57.178230411 +0000 UTC m=+407.278538466" observedRunningTime="2024-07-17 19:24:57.618658501 +0000 UTC m=+407.718966556" watchObservedRunningTime="2024-07-17 19:24:57.637080879 +0000 UTC m=+407.737388950"
	Jul 17 19:24:58 addons-747597 kubelet[1547]: I0717 19:24:58.055294    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c73a81-efe6-4fc7-b825-b2655ceeaab5" path="/var/lib/kubelet/pods/21c73a81-efe6-4fc7-b825-b2655ceeaab5/volumes"
	Jul 17 19:25:00 addons-747597 kubelet[1547]: I0717 19:25:00.126512    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135dfbc7-1f33-4ce3-80cc-36e5afe0c11f" path="/var/lib/kubelet/pods/135dfbc7-1f33-4ce3-80cc-36e5afe0c11f/volumes"
	Jul 17 19:25:00 addons-747597 kubelet[1547]: I0717 19:25:00.126977    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7153c16-e624-4db2-8244-1fb5a2a6991f" path="/var/lib/kubelet/pods/f7153c16-e624-4db2-8244-1fb5a2a6991f/volumes"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.611475    1547 scope.go:117] "RemoveContainer" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.631574    1547 scope.go:117] "RemoveContainer" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: E0717 19:25:01.631991    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": container with ID starting with f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b not found: ID does not exist" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.632048    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"} err="failed to get container status \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": rpc error: code = NotFound desc = could not find container \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": container with ID starting with f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b not found: ID does not exist"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.645544    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncwpb\" (UniqueName: \"kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb\") pod \"b8db8990-e740-4420-98d1-f8f1a63f2954\" (UID: \"b8db8990-e740-4420-98d1-f8f1a63f2954\") "
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.645604    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert\") pod \"b8db8990-e740-4420-98d1-f8f1a63f2954\" (UID: \"b8db8990-e740-4420-98d1-f8f1a63f2954\") "
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.652282    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b8db8990-e740-4420-98d1-f8f1a63f2954" (UID: "b8db8990-e740-4420-98d1-f8f1a63f2954"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.652292    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb" (OuterVolumeSpecName: "kube-api-access-ncwpb") pod "b8db8990-e740-4420-98d1-f8f1a63f2954" (UID: "b8db8990-e740-4420-98d1-f8f1a63f2954"). InnerVolumeSpecName "kube-api-access-ncwpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.746694    1547 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ncwpb\" (UniqueName: \"kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.746744    1547 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:25:02 addons-747597 kubelet[1547]: I0717 19:25:02.055536    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8db8990-e740-4420-98d1-f8f1a63f2954" path="/var/lib/kubelet/pods/b8db8990-e740-4420-98d1-f8f1a63f2954/volumes"
	
	
	==> storage-provisioner [ba3ec42298409c91cd1c4d66012b52dd46b53202134025a7637490e010f9c8f0] <==
	I0717 19:19:11.184047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:19:11.217395       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:19:11.217543       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:19:11.240416       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:19:11.240963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b50a7a67-0688-4d65-8776-9a699b69aae9", APIVersion:"v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26 became leader
	I0717 19:19:11.241085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26!
	I0717 19:19:11.341835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-747597 -n addons-747597
helpers_test.go:261: (dbg) Run:  kubectl --context addons-747597 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.91s)

TestAddons/parallel/MetricsServer (319.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.939793ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-m2zcj" [ecfedd7e-e869-4dd1-b482-62f0706cc601] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007413685s
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (148.301267ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m13.62541104s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (95.390941ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m17.204123011s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (136.504641ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m22.829987609s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (98.250296ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m31.526302697s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (89.9174ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m41.150648265s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (104.579516ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 4m58.389251376s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (87.588064ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 5m11.965685848s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (85.80847ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 5m45.979794005s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (101.481466ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 6m39.326705226s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (88.250148ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 8m3.623890609s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-747597 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-747597 top pods -n kube-system: exit status 1 (93.213983ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-vx2ls, age: 9m23.738140485s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-747597
helpers_test.go:235: (dbg) docker inspect addons-747597:

-- stdout --
	[
	    {
	        "Id": "dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455",
	        "Created": "2024-07-17T19:17:46.119548905Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 596661,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T19:17:46.264652095Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:476b38520acaa45848ac08864bd6ca4a7124b7e691863e24807ecda76b00d113",
	        "ResolvConfPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/hostname",
	        "HostsPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/hosts",
	        "LogPath": "/var/lib/docker/containers/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455/dda8db92681d1c2d0a202cdffd2c4a7eb8a15dc30999618bc251c7303a6b7455-json.log",
	        "Name": "/addons-747597",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-747597:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-747597",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b-init/diff:/var/lib/docker/overlay2/565efae8277f893e1a3772eb51129c6122836d34f0368ed890f207f355d67a18/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6321df0310d2b7617b61367a822b507dcb7b6f24a118d9134a5c0f737bcdf3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-747597",
	                "Source": "/var/lib/docker/volumes/addons-747597/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-747597",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-747597",
	                "name.minikube.sigs.k8s.io": "addons-747597",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d626e00ee6a37211769924797a6438dbe14f526af44275b8e7c651b68301959a",
	            "SandboxKey": "/var/run/docker/netns/d626e00ee6a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-747597": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "59fab1ccd3de0618adab634ca644a1d762012d21123cf13746cc98801bca43f9",
	                    "EndpointID": "cf04ef327c7a1daf7bc36516cfa460ddb82b3b0cfe0f096f72b636f0283866ba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-747597",
	                        "dda8db92681d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-747597 -n addons-747597
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 logs -n 25: (1.54455644s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-186638                                                                     | download-only-186638   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-639410                                                                     | download-only-639410   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-902211                                                                     | download-only-902211   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | --download-only -p                                                                          | download-docker-114745 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | download-docker-114745                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-114745                                                                   | download-docker-114745 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-794463   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | binary-mirror-794463                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35105                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-794463                                                                     | binary-mirror-794463   | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| addons  | enable dashboard -p                                                                         | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-747597 --wait=true                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | -p addons-747597                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-747597 ip                                                                            | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | -p addons-747597                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-747597 ssh cat                                                                       | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | /opt/local-path-provisioner/pvc-e6f3f4fe-8b6e-4e46-a13c-533c45ae5ad4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:21 UTC | 17 Jul 24 19:21 UTC |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| addons  | addons-747597 addons                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747597 addons                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC | 17 Jul 24 19:22 UTC |
	|         | addons-747597                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-747597 ssh curl -s                                                                   | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-747597 ip                                                                            | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-747597 addons disable                                                                | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-747597 addons                                                                        | addons-747597          | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:27 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:17:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:21.718625  596166 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:17:21.718784  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:21.718801  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:17:21.718808  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:21.719046  596166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:17:21.719520  596166 out.go:298] Setting JSON to false
	I0717 19:17:21.720413  596166 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10785,"bootTime":1721233057,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:17:21.720493  596166 start.go:139] virtualization:  
	I0717 19:17:21.723241  596166 out.go:177] * [addons-747597] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 19:17:21.725161  596166 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:17:21.725228  596166 notify.go:220] Checking for updates...
	I0717 19:17:21.729208  596166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:21.731175  596166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:17:21.732979  596166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:17:21.734754  596166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 19:17:21.737008  596166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:17:21.738955  596166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:17:21.760370  596166 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:17:21.760497  596166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:21.824655  596166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:21.815605673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:21.824801  596166 docker.go:307] overlay module found
	I0717 19:17:21.828054  596166 out.go:177] * Using the docker driver based on user configuration
	I0717 19:17:21.829931  596166 start.go:297] selected driver: docker
	I0717 19:17:21.829950  596166 start.go:901] validating driver "docker" against <nil>
	I0717 19:17:21.829965  596166 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:17:21.830594  596166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:21.879901  596166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:21.871112628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:21.880062  596166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:17:21.880293  596166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:17:21.882569  596166 out.go:177] * Using Docker driver with root privileges
	I0717 19:17:21.884990  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:17:21.885015  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:21.885030  596166 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:17:21.885135  596166 start.go:340] cluster config:
	{Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:21.888891  596166 out.go:177] * Starting "addons-747597" primary control-plane node in "addons-747597" cluster
	I0717 19:17:21.890888  596166 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 19:17:21.892749  596166 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 19:17:21.894711  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:21.894757  596166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 19:17:21.894772  596166 cache.go:56] Caching tarball of preloaded images
	I0717 19:17:21.894797  596166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 19:17:21.894853  596166 preload.go:172] Found /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 19:17:21.894863  596166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 19:17:21.895202  596166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json ...
	I0717 19:17:21.895232  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json: {Name:mk00e7f571c60a530945c6cef35ba32aa47eea2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:21.913432  596166 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:17:21.913582  596166 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 19:17:21.913603  596166 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 19:17:21.913608  596166 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 19:17:21.913616  596166 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 19:17:21.913622  596166 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 19:17:38.578202  596166 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 19:17:38.578236  596166 cache.go:194] Successfully downloaded all kic artifacts
	I0717 19:17:38.578279  596166 start.go:360] acquireMachinesLock for addons-747597: {Name:mkfb0f489a4eb78a4e21cfb654d8f2daf2a9477b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:38.578778  596166 start.go:364] duration metric: took 462.761µs to acquireMachinesLock for "addons-747597"
	I0717 19:17:38.578818  596166 start.go:93] Provisioning new machine with config: &{Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:17:38.578910  596166 start.go:125] createHost starting for "" (driver="docker")
	I0717 19:17:38.581230  596166 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 19:17:38.581524  596166 start.go:159] libmachine.API.Create for "addons-747597" (driver="docker")
	I0717 19:17:38.581566  596166 client.go:168] LocalClient.Create starting
	I0717 19:17:38.581714  596166 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem
	I0717 19:17:39.073257  596166 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem
	I0717 19:17:39.610837  596166 cli_runner.go:164] Run: docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:39.625891  596166 cli_runner.go:211] docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:39.625982  596166 network_create.go:284] running [docker network inspect addons-747597] to gather additional debugging logs...
	I0717 19:17:39.626004  596166 cli_runner.go:164] Run: docker network inspect addons-747597
	W0717 19:17:39.641731  596166 cli_runner.go:211] docker network inspect addons-747597 returned with exit code 1
	I0717 19:17:39.641761  596166 network_create.go:287] error running [docker network inspect addons-747597]: docker network inspect addons-747597: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-747597 not found
	I0717 19:17:39.641774  596166 network_create.go:289] output of [docker network inspect addons-747597]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-747597 not found
	
	** /stderr **
	I0717 19:17:39.641871  596166 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:39.657575  596166 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000478780}
	I0717 19:17:39.657618  596166 network_create.go:124] attempt to create docker network addons-747597 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 19:17:39.657674  596166 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-747597 addons-747597
	I0717 19:17:39.726631  596166 network_create.go:108] docker network addons-747597 192.168.49.0/24 created
	I0717 19:17:39.726662  596166 kic.go:121] calculated static IP "192.168.49.2" for the "addons-747597" container
	I0717 19:17:39.726748  596166 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 19:17:39.739891  596166 cli_runner.go:164] Run: docker volume create addons-747597 --label name.minikube.sigs.k8s.io=addons-747597 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:17:39.756280  596166 oci.go:103] Successfully created a docker volume addons-747597
	I0717 19:17:39.756372  596166 cli_runner.go:164] Run: docker run --rm --name addons-747597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --entrypoint /usr/bin/test -v addons-747597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib
	I0717 19:17:41.831592  596166 cli_runner.go:217] Completed: docker run --rm --name addons-747597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --entrypoint /usr/bin/test -v addons-747597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib: (2.075178303s)
	I0717 19:17:41.831619  596166 oci.go:107] Successfully prepared a docker volume addons-747597
	I0717 19:17:41.831645  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:41.831721  596166 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 19:17:41.831819  596166 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 19:17:46.046721  596166 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir: (4.214853601s)
	I0717 19:17:46.046755  596166 kic.go:203] duration metric: took 4.21508633s to extract preloaded images to volume ...
	W0717 19:17:46.046911  596166 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:17:46.047031  596166 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:17:46.106111  596166 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-747597 --name addons-747597 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747597 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-747597 --network addons-747597 --ip 192.168.49.2 --volume addons-747597:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e
	I0717 19:17:46.423019  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Running}}
	I0717 19:17:46.440159  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:46.461943  596166 cli_runner.go:164] Run: docker exec addons-747597 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:17:46.522898  596166 oci.go:144] the created container "addons-747597" has a running status.
	I0717 19:17:46.522934  596166 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa...
	I0717 19:17:46.884540  596166 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:17:46.912990  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:46.936762  596166 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:17:46.936784  596166 kic_runner.go:114] Args: [docker exec --privileged addons-747597 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:17:47.033515  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:17:47.062218  596166 machine.go:94] provisionDockerMachine start ...
	I0717 19:17:47.062320  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.092868  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.093156  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.093172  596166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:17:47.267483  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747597
	
	I0717 19:17:47.267508  596166 ubuntu.go:169] provisioning hostname "addons-747597"
	I0717 19:17:47.267579  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.284221  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.284487  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.284504  596166 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-747597 && echo "addons-747597" | sudo tee /etc/hostname
	I0717 19:17:47.437281  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747597
	
	I0717 19:17:47.437445  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.457540  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:47.457805  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:47.457822  596166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-747597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-747597/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-747597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:47.599468  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:47.599537  596166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19283-589755/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-589755/.minikube}
	I0717 19:17:47.599574  596166 ubuntu.go:177] setting up certificates
	I0717 19:17:47.599618  596166 provision.go:84] configureAuth start
	I0717 19:17:47.599704  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:47.616323  596166 provision.go:143] copyHostCerts
	I0717 19:17:47.616412  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/ca.pem (1082 bytes)
	I0717 19:17:47.616534  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/cert.pem (1123 bytes)
	I0717 19:17:47.616594  596166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-589755/.minikube/key.pem (1679 bytes)
	I0717 19:17:47.616645  596166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem org=jenkins.addons-747597 san=[127.0.0.1 192.168.49.2 addons-747597 localhost minikube]
	I0717 19:17:47.980472  596166 provision.go:177] copyRemoteCerts
	I0717 19:17:47.980554  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:47.980597  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:47.998053  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.098632  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:17:48.125050  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 19:17:48.149791  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:17:48.174060  596166 provision.go:87] duration metric: took 574.408495ms to configureAuth
	I0717 19:17:48.174086  596166 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:48.174282  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:17:48.174381  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.190740  596166 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:48.190986  596166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I0717 19:17:48.191004  596166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:48.434333  596166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:48.434354  596166 machine.go:97] duration metric: took 1.372115681s to provisionDockerMachine
	I0717 19:17:48.434365  596166 client.go:171] duration metric: took 9.85278888s to LocalClient.Create
	I0717 19:17:48.434377  596166 start.go:167] duration metric: took 9.852854545s to libmachine.API.Create "addons-747597"
	I0717 19:17:48.434385  596166 start.go:293] postStartSetup for "addons-747597" (driver="docker")
	I0717 19:17:48.434396  596166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:48.434463  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:48.434528  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.451341  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.549041  596166 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:48.552325  596166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:48.552361  596166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:48.552372  596166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:48.552379  596166 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 19:17:48.552390  596166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-589755/.minikube/addons for local assets ...
	I0717 19:17:48.552462  596166 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-589755/.minikube/files for local assets ...
	I0717 19:17:48.552489  596166 start.go:296] duration metric: took 118.098648ms for postStartSetup
	I0717 19:17:48.552811  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:48.571281  596166 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/config.json ...
	I0717 19:17:48.571600  596166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:48.571668  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.587100  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.684072  596166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:48.688287  596166 start.go:128] duration metric: took 10.109360386s to createHost
	I0717 19:17:48.688314  596166 start.go:83] releasing machines lock for "addons-747597", held for 10.109516587s
	I0717 19:17:48.688386  596166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747597
	I0717 19:17:48.704731  596166 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:48.704788  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.705043  596166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:48.705107  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:17:48.722875  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.726364  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:17:48.957666  596166 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:48.961976  596166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:49.100914  596166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:49.105049  596166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:49.128053  596166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:49.128167  596166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:49.163861  596166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
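The two `find` invocations above disable conflicting CNI configs by renaming them with a `.mk_disabled` suffix instead of deleting them, so the change is reversible. A minimal sketch of that rename pattern, using a throwaway temp directory in place of `/etc/cni/net.d` (file names are illustrative; the quoting here is the safe `-exec sh -c` idiom rather than the log's literal command):

```shell
# Sketch of minikube's CNI-disable step: rename bridge/podman configs
# to *.mk_disabled (reversible) while leaving other configs alone.
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/100-crio-bridge.conf" "$dir/10-kindnet.conflist"

# Match bridge/podman configs that are not already disabled
find "$dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$dir"
```

Re-enabling is the inverse rename, which is why minikube prefers this over `rm`.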
	I0717 19:17:49.163886  596166 start.go:495] detecting cgroup driver to use...
	I0717 19:17:49.163920  596166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:49.163971  596166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:49.179956  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:49.192127  596166 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:17:49.192240  596166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:49.206596  596166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:49.221085  596166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:49.304884  596166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:49.400214  596166 docker.go:233] disabling docker service ...
	I0717 19:17:49.400322  596166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:49.421276  596166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:49.433421  596166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:49.516786  596166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:49.617837  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:49.630131  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:49.648908  596166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:49.649005  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.660263  596166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:49.660396  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.670375  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.680512  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.691188  596166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:49.700910  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.711310  596166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:49.727441  596166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
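The sed sequence above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager to `cgroupfs`, force `conmon_cgroup = "pod"`, and inject `net.ipv4.ip_unprivileged_port_start=0` into `default_sysctls`. A sketch of the same edits against a throwaway copy (the sample file contents are an assumption, and GNU sed is assumed for `-i` and the one-line `a`/`\n` extensions):

```shell
# Apply minikube's crio.conf edits to a scratch file and show the result.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                       # drop any old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf" # re-add after cgroup_manager
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The delete-then-append dance for `conmon_cgroup` makes the edit idempotent regardless of what value the file shipped with.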
	I0717 19:17:49.738548  596166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:49.748376  596166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:49.757501  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:49.848042  596166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:49.962286  596166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:49.962397  596166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:49.967033  596166 start.go:563] Will wait 60s for crictl version
	I0717 19:17:49.967119  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:17:49.970262  596166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:50.016155  596166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:50.016305  596166 ssh_runner.go:195] Run: crio --version
	I0717 19:17:50.056016  596166 ssh_runner.go:195] Run: crio --version
	I0717 19:17:50.098760  596166 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 19:17:50.100943  596166 cli_runner.go:164] Run: docker network inspect addons-747597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:50.117732  596166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:50.121543  596166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
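The `/etc/hosts` update above uses a grep-then-append pattern that is idempotent: strip any existing line ending in the hostname, then append the current IP mapping. A sketch of that pattern against a temp file instead of the real `/etc/hosts` (the helper name `update_hosts_entry` is invented for illustration):

```shell
# Idempotent hosts-entry update, as minikube does for
# host.minikube.internal / control-plane.minikube.internal.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

update_hosts_entry() {
  # $1 = hosts file, $2 = IP, $3 = hostname
  # Remove any line ending in "<tab><hostname>", then append the new entry.
  { grep -v "$(printf '\t')$3\$" "$1"; printf '%s\t%s\n' "$2" "$3"; } > "$1.new"
  mv "$1.new" "$1"
}

update_hosts_entry "$hosts" 192.168.58.1 host.minikube.internal
cat "$hosts"
```

Running it again with the same arguments leaves exactly one entry for the name, which is the point of filtering before appending.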
	I0717 19:17:50.132878  596166 kubeadm.go:883] updating cluster {Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:17:50.133005  596166 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:50.133072  596166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:50.210954  596166 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:17:50.210975  596166 crio.go:433] Images already preloaded, skipping extraction
	I0717 19:17:50.211034  596166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:50.247025  596166 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:17:50.247050  596166 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:50.247059  596166 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 19:17:50.247153  596166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-747597 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:17:50.247236  596166 ssh_runner.go:195] Run: crio config
	I0717 19:17:50.311123  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:17:50.311156  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:50.311168  596166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:50.311207  596166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-747597 NodeName:addons-747597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:50.311420  596166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-747597"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:50.311513  596166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:17:50.320537  596166 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:50.320616  596166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:50.329154  596166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 19:17:50.346703  596166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:50.364103  596166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 19:17:50.382440  596166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:50.385636  596166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:50.395914  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:50.483524  596166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:17:50.497244  596166 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597 for IP: 192.168.49.2
	I0717 19:17:50.497307  596166 certs.go:194] generating shared ca certs ...
	I0717 19:17:50.497338  596166 certs.go:226] acquiring lock for ca certs: {Name:mkc7f7593d6d49a6ae6b1662b77cfee02ea809e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.497897  596166 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key
	I0717 19:17:50.833850  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt ...
	I0717 19:17:50.833890  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt: {Name:mka5f97aa1d51e6f0603d75c5f9a2b330dc025e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.834786  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key ...
	I0717 19:17:50.834803  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key: {Name:mkf4b159ab3cd3d5e3d249a2fff3bc33a90d072b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:50.835245  596166 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key
	I0717 19:17:51.293004  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt ...
	I0717 19:17:51.293034  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt: {Name:mke165fa6523e843211ded021898033e2404971f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:51.293215  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key ...
	I0717 19:17:51.293227  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key: {Name:mkccbf8c4d24d8d21caa3e31ecf8f6434f64f5a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:51.293309  596166 certs.go:256] generating profile certs ...
	I0717 19:17:51.293369  596166 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key
	I0717 19:17:51.293387  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt with IP's: []
	I0717 19:17:52.093747  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt ...
	I0717 19:17:52.093826  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: {Name:mk57e121424289d3fe721af9c3e61bbb5d304f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.094656  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key ...
	I0717 19:17:52.094707  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.key: {Name:mk1cd8ec5af29b222ec8b05308a6edb27d080927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.095303  596166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c
	I0717 19:17:52.095359  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 19:17:52.650684  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c ...
	I0717 19:17:52.650767  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c: {Name:mka4ee7d76916177a0049e09d0b7e9952971bef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.651405  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c ...
	I0717 19:17:52.651453  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c: {Name:mk7b05f553b03554eebba85a801f758f4511ec95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.651598  596166 certs.go:381] copying /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt.01a7d43c -> /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt
	I0717 19:17:52.651736  596166 certs.go:385] copying /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key.01a7d43c -> /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key
	I0717 19:17:52.651865  596166 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key
	I0717 19:17:52.651906  596166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt with IP's: []
	I0717 19:17:52.809070  596166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt ...
	I0717 19:17:52.809144  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt: {Name:mk7f7c357585a783a146f3fa02fe902a6a53dd99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.809358  596166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key ...
	I0717 19:17:52.809403  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key: {Name:mk893a3c07b663871f150545a797c3bccf86b1e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:52.809652  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:52.809732  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:17:52.809793  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:52.809842  596166 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-589755/.minikube/certs/key.pem (1679 bytes)
	I0717 19:17:52.810599  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:52.855451  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:52.900836  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:52.925071  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:17:52.948565  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 19:17:52.972384  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:17:52.996470  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:53.023900  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:53.048535  596166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:53.073058  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:53.091329  596166 ssh_runner.go:195] Run: openssl version
	I0717 19:17:53.097560  596166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:53.107235  596166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.110786  596166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:17 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.110898  596166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:53.117930  596166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
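The `b5213941.0` symlink above follows OpenSSL's CA lookup convention: tools scan `/etc/ssl/certs` for files named `<subject-hash>.0`, where the hash comes from `openssl x509 -hash -noout`. Since the hash depends only on the subject DN, every minikube CA (subject `CN=minikubeCA`) hashes to the same `b5213941` seen in the log. A sketch with a throwaway self-signed CA:

```shell
# Reproduce the subject-hash symlink convention with a scratch CA.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" \
  -subj "/CN=minikubeCA" 2>/dev/null

# For CN=minikubeCA this prints b5213941, matching the symlink name in the log.
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
echo "$hash"
```

This is why minikube can precompute the symlink name instead of running `c_rehash` over the whole directory.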
	I0717 19:17:53.127690  596166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:17:53.131093  596166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:17:53.131170  596166 kubeadm.go:392] StartCluster: {Name:addons-747597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-747597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:53.131263  596166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:53.131330  596166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:53.169701  596166 cri.go:89] found id: ""
	I0717 19:17:53.169809  596166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:53.178427  596166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:53.187328  596166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 19:17:53.187433  596166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:53.196428  596166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:17:53.196447  596166 kubeadm.go:157] found existing configuration files:
	
	I0717 19:17:53.196526  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:53.205320  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:17:53.205388  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:17:53.213593  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:53.222394  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:17:53.222482  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:17:53.230790  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:53.239388  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:17:53.239449  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:17:53.247823  596166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:53.256552  596166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:17:53.256644  596166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:17:53.264981  596166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 19:17:53.308984  596166 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:17:53.309249  596166 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:17:53.349570  596166 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 19:17:53.349691  596166 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0717 19:17:53.349770  596166 kubeadm.go:310] OS: Linux
	I0717 19:17:53.349843  596166 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 19:17:53.349913  596166 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 19:17:53.349994  596166 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 19:17:53.350060  596166 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 19:17:53.350121  596166 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 19:17:53.350176  596166 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 19:17:53.350223  596166 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 19:17:53.350274  596166 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 19:17:53.350323  596166 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 19:17:53.417468  596166 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:17:53.417764  596166 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:17:53.417908  596166 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:17:53.663828  596166 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:17:53.666617  596166 out.go:204]   - Generating certificates and keys ...
	I0717 19:17:53.666749  596166 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:17:53.666829  596166 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:17:54.081589  596166 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:17:54.666824  596166 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:17:55.066585  596166 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:17:55.329265  596166 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:17:56.287066  596166 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:17:56.287286  596166 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-747597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 19:17:56.688200  596166 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:17:56.688568  596166 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-747597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 19:17:56.887065  596166 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:17:57.566148  596166 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:17:58.037853  596166 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:17:58.038044  596166 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:17:58.402627  596166 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:17:59.501928  596166 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:17:59.718924  596166 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:18:00.454091  596166 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:18:00.677742  596166 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:18:00.678525  596166 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:18:00.683209  596166 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:18:00.685732  596166 out.go:204]   - Booting up control plane ...
	I0717 19:18:00.685850  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:18:00.685930  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:18:00.686717  596166 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:18:00.697193  596166 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:18:00.698347  596166 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:18:00.698400  596166 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:18:00.790178  596166 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:18:00.790287  596166 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:18:02.792048  596166 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001914624s
	I0717 19:18:02.792140  596166 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:18:09.293554  596166 kubeadm.go:310] [api-check] The API server is healthy after 6.501726692s
	I0717 19:18:09.318470  596166 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:18:09.336379  596166 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:18:09.374182  596166 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:18:09.374377  596166 kubeadm.go:310] [mark-control-plane] Marking the node addons-747597 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:18:09.390994  596166 kubeadm.go:310] [bootstrap-token] Using token: hqg7j9.p48nu7eegj1iucst
	I0717 19:18:09.393145  596166 out.go:204]   - Configuring RBAC rules ...
	I0717 19:18:09.393293  596166 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:18:09.409913  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:18:09.419668  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:18:09.423609  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:18:09.427647  596166 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:18:09.431924  596166 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:18:09.700648  596166 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:18:10.158526  596166 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:18:10.700600  596166 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:18:10.701747  596166 kubeadm.go:310] 
	I0717 19:18:10.701821  596166 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:18:10.701834  596166 kubeadm.go:310] 
	I0717 19:18:10.701911  596166 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:18:10.701922  596166 kubeadm.go:310] 
	I0717 19:18:10.701948  596166 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:18:10.702008  596166 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:18:10.702059  596166 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:18:10.702067  596166 kubeadm.go:310] 
	I0717 19:18:10.702119  596166 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:18:10.702127  596166 kubeadm.go:310] 
	I0717 19:18:10.702172  596166 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:18:10.702180  596166 kubeadm.go:310] 
	I0717 19:18:10.702230  596166 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:18:10.702305  596166 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:18:10.702375  596166 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:18:10.702382  596166 kubeadm.go:310] 
	I0717 19:18:10.702464  596166 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:18:10.702559  596166 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:18:10.702567  596166 kubeadm.go:310] 
	I0717 19:18:10.702653  596166 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hqg7j9.p48nu7eegj1iucst \
	I0717 19:18:10.702754  596166 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:92bc4c9c8cac954f78c64a34e7c101c21493fd8a72d692c72f057161814bfde5 \
	I0717 19:18:10.702777  596166 kubeadm.go:310] 	--control-plane 
	I0717 19:18:10.702787  596166 kubeadm.go:310] 
	I0717 19:18:10.702869  596166 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:18:10.702876  596166 kubeadm.go:310] 
	I0717 19:18:10.702956  596166 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hqg7j9.p48nu7eegj1iucst \
	I0717 19:18:10.703056  596166 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:92bc4c9c8cac954f78c64a34e7c101c21493fd8a72d692c72f057161814bfde5 
	I0717 19:18:10.706386  596166 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0717 19:18:10.706555  596166 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:18:10.706580  596166 cni.go:84] Creating CNI manager for ""
	I0717 19:18:10.706604  596166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:18:10.708935  596166 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:18:10.710892  596166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:18:10.714420  596166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 19:18:10.714439  596166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:18:10.732799  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:18:11.032413  596166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:18:11.032495  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:11.032557  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-747597 minikube.k8s.io/updated_at=2024_07_17T19_18_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=addons-747597 minikube.k8s.io/primary=true
	I0717 19:18:11.190016  596166 ops.go:34] apiserver oom_adj: -16
	I0717 19:18:11.190140  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:11.690887  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:12.190903  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:12.690943  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:13.190838  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:13.690604  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:14.191111  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:14.690331  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:15.191243  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:15.690974  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:16.190522  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:16.691212  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:17.190699  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:17.690273  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:18.190682  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:18.690278  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:19.190921  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:19.691006  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:20.190763  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:20.690799  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:21.190936  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:21.690283  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:22.190356  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:22.690898  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.190974  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.690729  596166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:18:23.847573  596166 kubeadm.go:1113] duration metric: took 12.81514787s to wait for elevateKubeSystemPrivileges
	I0717 19:18:23.847603  596166 kubeadm.go:394] duration metric: took 30.716463909s to StartCluster
	I0717 19:18:23.847622  596166 settings.go:142] acquiring lock: {Name:mkb34a92534e6ebb88b1dc61f5cef4e8adaa41ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:18:23.848429  596166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:18:23.848910  596166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-589755/kubeconfig: {Name:mk6ca856576f3a45e2fc0d3c3f561dd766d29da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:18:23.849112  596166 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:18:23.849210  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:18:23.849468  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:18:23.849500  596166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 19:18:23.849594  596166 addons.go:69] Setting yakd=true in profile "addons-747597"
	I0717 19:18:23.849622  596166 addons.go:234] Setting addon yakd=true in "addons-747597"
	I0717 19:18:23.849649  596166 addons.go:69] Setting cloud-spanner=true in profile "addons-747597"
	I0717 19:18:23.849706  596166 addons.go:234] Setting addon cloud-spanner=true in "addons-747597"
	I0717 19:18:23.849758  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.849767  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.850209  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.850333  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.849621  596166 addons.go:69] Setting ingress=true in profile "addons-747597"
	I0717 19:18:23.850771  596166 addons.go:234] Setting addon ingress=true in "addons-747597"
	I0717 19:18:23.850810  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.851215  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.852091  596166 out.go:177] * Verifying Kubernetes components...
	I0717 19:18:23.852278  596166 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-747597"
	I0717 19:18:23.852349  596166 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-747597"
	I0717 19:18:23.852380  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.852789  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.853937  596166 addons.go:69] Setting default-storageclass=true in profile "addons-747597"
	I0717 19:18:23.853981  596166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-747597"
	I0717 19:18:23.854261  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.863417  596166 addons.go:69] Setting gcp-auth=true in profile "addons-747597"
	I0717 19:18:23.863479  596166 mustload.go:65] Loading cluster: addons-747597
	I0717 19:18:23.863664  596166 config.go:182] Loaded profile config "addons-747597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:18:23.863908  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.865761  596166 addons.go:69] Setting ingress-dns=true in profile "addons-747597"
	I0717 19:18:23.865799  596166 addons.go:234] Setting addon ingress-dns=true in "addons-747597"
	I0717 19:18:23.865852  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.866249  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.870246  596166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:18:23.875944  596166 addons.go:69] Setting inspektor-gadget=true in profile "addons-747597"
	I0717 19:18:23.875984  596166 addons.go:234] Setting addon inspektor-gadget=true in "addons-747597"
	I0717 19:18:23.876022  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.876459  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.890219  596166 addons.go:69] Setting metrics-server=true in profile "addons-747597"
	I0717 19:18:23.890258  596166 addons.go:234] Setting addon metrics-server=true in "addons-747597"
	I0717 19:18:23.890293  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.890752  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.908403  596166 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-747597"
	I0717 19:18:23.908446  596166 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-747597"
	I0717 19:18:23.908490  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.908958  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.927535  596166 addons.go:69] Setting volcano=true in profile "addons-747597"
	I0717 19:18:23.927632  596166 addons.go:234] Setting addon volcano=true in "addons-747597"
	I0717 19:18:23.927702  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.936346  596166 addons.go:69] Setting volumesnapshots=true in profile "addons-747597"
	I0717 19:18:23.936394  596166 addons.go:234] Setting addon volumesnapshots=true in "addons-747597"
	I0717 19:18:23.936433  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.936863  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:23.959532  596166 addons.go:69] Setting registry=true in profile "addons-747597"
	I0717 19:18:23.960164  596166 addons.go:234] Setting addon registry=true in "addons-747597"
	I0717 19:18:23.960235  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:23.960723  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.004303  596166 addons.go:69] Setting storage-provisioner=true in profile "addons-747597"
	I0717 19:18:24.004401  596166 addons.go:234] Setting addon storage-provisioner=true in "addons-747597"
	I0717 19:18:24.004473  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.004920  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.004999  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.011484  596166 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-747597"
	I0717 19:18:24.024247  596166 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-747597"
	I0717 19:18:24.024609  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.013508  596166 addons.go:234] Setting addon default-storageclass=true in "addons-747597"
	I0717 19:18:24.033026  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.033501  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.071255  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 19:18:24.075506  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 19:18:24.077335  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 19:18:24.100228  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 19:18:24.100477  596166 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 19:18:24.102678  596166 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 19:18:24.104537  596166 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 19:18:24.104561  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 19:18:24.104628  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.108885  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 19:18:24.109302  596166 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 19:18:24.111731  596166 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 19:18:24.111757  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 19:18:24.111827  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.120101  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.127480  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 19:18:24.127926  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:24.102689  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 19:18:24.128733  596166 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 19:18:24.128804  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.143190  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 19:18:24.144993  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 19:18:24.149191  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 19:18:24.149222  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 19:18:24.149303  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.166296  596166 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 19:18:24.166359  596166 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 19:18:24.170293  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:24.173677  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 19:18:24.176162  596166 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 19:18:24.176186  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 19:18:24.176259  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.180843  596166 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 19:18:24.180865  596166 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 19:18:24.180934  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.203016  596166 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 19:18:24.203145  596166 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 19:18:24.203915  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:18:24.203942  596166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:18:24.204007  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.204868  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 19:18:24.204882  596166 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 19:18:24.204940  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.229290  596166 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 19:18:24.235573  596166 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 19:18:24.235605  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 19:18:24.235690  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.238703  596166 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 19:18:24.242442  596166 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-747597"
	I0717 19:18:24.242484  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:24.243258  596166 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 19:18:24.243277  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 19:18:24.243339  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	W0717 19:18:24.253831  596166 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 19:18:24.256300  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:24.272506  596166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:18:24.272525  596166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:18:24.272594  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.283501  596166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:18:24.289858  596166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:18:24.290309  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 19:18:24.295509  596166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:18:24.295534  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:18:24.295601  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.312938  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.386237  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.409054  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.419582  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.427687  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.431474  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.431927  596166 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 19:18:24.434160  596166 out.go:177]   - Using image docker.io/busybox:stable
	I0717 19:18:24.436392  596166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 19:18:24.436414  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 19:18:24.436478  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:24.467527  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.476494  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.477234  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.487680  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.491452  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.493970  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.508461  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:24.866387  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 19:18:24.869296  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 19:18:24.892469  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:18:24.892539  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 19:18:24.900822  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 19:18:24.925844  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 19:18:24.925916  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 19:18:24.935864  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 19:18:24.935945  596166 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 19:18:24.946496  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:18:24.951851  596166 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 19:18:24.951927  596166 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 19:18:24.955283  596166 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 19:18:24.955372  596166 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 19:18:24.993903  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 19:18:24.993989  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 19:18:24.999205  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 19:18:25.005110  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:18:25.043208  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:18:25.043290  596166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:18:25.046532  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 19:18:25.081867  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 19:18:25.081955  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 19:18:25.124527  596166 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 19:18:25.124596  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 19:18:25.153742  596166 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 19:18:25.153821  596166 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 19:18:25.157527  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 19:18:25.157611  596166 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 19:18:25.161417  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 19:18:25.161501  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 19:18:25.200266  596166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:18:25.200345  596166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:18:25.302537  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 19:18:25.302629  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 19:18:25.313977  596166 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 19:18:25.314052  596166 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 19:18:25.344887  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 19:18:25.374671  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 19:18:25.374697  596166 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 19:18:25.390940  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:18:25.397846  596166 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 19:18:25.397920  596166 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 19:18:25.477894  596166 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 19:18:25.477965  596166 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 19:18:25.509396  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 19:18:25.509473  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 19:18:25.544168  596166 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 19:18:25.544242  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 19:18:25.550883  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 19:18:25.550956  596166 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 19:18:25.613718  596166 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 19:18:25.613789  596166 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 19:18:25.663232  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 19:18:25.679914  596166 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:25.680003  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 19:18:25.720539  596166 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 19:18:25.720617  596166 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 19:18:25.777277  596166 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 19:18:25.777350  596166 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 19:18:25.827946  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:25.833583  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 19:18:25.833653  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 19:18:25.845964  596166 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 19:18:25.846024  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 19:18:25.959662  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 19:18:25.979524  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 19:18:25.979598  596166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 19:18:26.157654  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 19:18:26.157725  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 19:18:26.300795  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 19:18:26.300866  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 19:18:26.415685  596166 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 19:18:26.415757  596166 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 19:18:26.524213  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 19:18:27.319528  596166 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.029636665s)
	I0717 19:18:27.320596  596166 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.030267457s)
	I0717 19:18:27.320656  596166 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 19:18:27.320548  596166 node_ready.go:35] waiting up to 6m0s for node "addons-747597" to be "Ready" ...
	I0717 19:18:28.396203  596166 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-747597" context rescaled to 1 replicas
	I0717 19:18:29.252654  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.386181259s)
	I0717 19:18:29.252715  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.383353408s)
	I0717 19:18:29.388190  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:29.747335  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.846441641s)
	I0717 19:18:29.747614  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.801047662s)
	I0717 19:18:30.795202  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.795918173s)
	I0717 19:18:30.795237  596166 addons.go:475] Verifying addon ingress=true in "addons-747597"
	I0717 19:18:30.795418  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.790226384s)
	I0717 19:18:30.795588  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.748996903s)
	I0717 19:18:30.795624  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.450668132s)
	I0717 19:18:30.795634  596166 addons.go:475] Verifying addon registry=true in "addons-747597"
	I0717 19:18:30.795827  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.404804223s)
	I0717 19:18:30.795848  596166 addons.go:475] Verifying addon metrics-server=true in "addons-747597"
	I0717 19:18:30.795892  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.132587676s)
	I0717 19:18:30.797806  596166 out.go:177] * Verifying ingress addon...
	I0717 19:18:30.799390  596166 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-747597 service yakd-dashboard -n yakd-dashboard
	
	I0717 19:18:30.799412  596166 out.go:177] * Verifying registry addon...
	I0717 19:18:30.801508  596166 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 19:18:30.802940  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 19:18:30.808388  596166 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 19:18:30.808416  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:30.811328  596166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 19:18:30.811346  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:30.856011  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.027972986s)
	W0717 19:18:30.856050  596166 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 19:18:30.856071  596166 retry.go:31] will retry after 273.179894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 19:18:30.856101  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.896355285s)
	I0717 19:18:31.130256  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 19:18:31.142172  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.617867337s)
	I0717 19:18:31.142209  596166 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-747597"
	I0717 19:18:31.144576  596166 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 19:18:31.146922  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 19:18:31.170208  596166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 19:18:31.170236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:31.307319  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:31.310050  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:31.651097  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:31.805576  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:31.816302  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:31.823696  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:32.153987  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:32.305168  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:32.308651  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:32.651161  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:32.805922  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:32.808149  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:32.962328  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 19:18:32.962455  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:32.991424  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:33.149731  596166 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 19:18:33.153483  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:33.176163  596166 addons.go:234] Setting addon gcp-auth=true in "addons-747597"
	I0717 19:18:33.176257  596166 host.go:66] Checking if "addons-747597" exists ...
	I0717 19:18:33.176727  596166 cli_runner.go:164] Run: docker container inspect addons-747597 --format={{.State.Status}}
	I0717 19:18:33.208652  596166 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 19:18:33.208713  596166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747597
	I0717 19:18:33.227339  596166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/addons-747597/id_rsa Username:docker}
	I0717 19:18:33.306984  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:33.309820  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:33.651219  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:33.809991  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:33.811401  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:33.829602  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:33.946116  596166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.815810535s)
	I0717 19:18:33.948805  596166 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 19:18:33.950746  596166 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 19:18:33.952953  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 19:18:33.953019  596166 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 19:18:33.987943  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 19:18:33.988016  596166 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 19:18:34.011142  596166 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 19:18:34.011218  596166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 19:18:34.036604  596166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 19:18:34.158145  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:34.305598  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:34.311312  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:34.657132  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:34.843261  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:34.854461  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:34.872688  596166 addons.go:475] Verifying addon gcp-auth=true in "addons-747597"
	I0717 19:18:34.874689  596166 out.go:177] * Verifying gcp-auth addon...
	I0717 19:18:34.877908  596166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 19:18:34.906030  596166 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 19:18:34.906097  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:35.153453  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:35.309214  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:35.310757  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:35.385210  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:35.651737  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:35.808644  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:35.810332  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:35.881862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:36.151742  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:36.305968  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:36.307762  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:36.324579  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:36.381250  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:36.653345  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:36.812167  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:36.812837  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:36.883728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:37.151929  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:37.313854  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:37.314855  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:37.382010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:37.651476  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:37.805854  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:37.808156  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:37.881845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:38.151070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:38.308144  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:38.312868  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:38.381762  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:38.651293  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:38.806647  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:38.807575  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:38.824347  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:38.881161  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:39.150949  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:39.306958  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:39.310923  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:39.381844  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:39.651668  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:39.805890  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:39.807735  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:39.881989  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:40.151532  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:40.305331  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:40.307661  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:40.381914  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:40.651462  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:40.805823  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:40.808722  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:40.824601  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:40.881599  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:41.152055  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:41.305827  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:41.307933  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:41.381678  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:41.651686  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:41.805581  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:41.807717  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:41.882168  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:42.151907  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:42.305769  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:42.309383  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:42.381777  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:42.650973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:42.806152  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:42.806479  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:42.881906  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:43.152003  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:43.306604  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:43.307730  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:43.324044  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:43.382700  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:43.652706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:43.807553  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:43.808438  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:43.881942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:44.150869  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:44.308112  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:44.308339  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:44.381910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:44.651743  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:44.806068  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:44.806752  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:44.881707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:45.153267  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:45.308026  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:45.308540  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:45.324959  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:45.381928  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:45.652014  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:45.806374  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:45.808510  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:45.881572  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:46.153864  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:46.307776  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:46.308816  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:46.381568  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:46.651902  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:46.806416  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:46.807084  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:46.881550  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:47.152096  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:47.307546  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:47.308244  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:47.325445  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:47.382575  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:47.652029  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:47.806453  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:47.808965  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:47.881524  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:48.152126  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:48.308513  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:48.309285  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:48.381826  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:48.652273  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:48.807335  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:48.808117  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:48.881853  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:49.152150  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:49.306828  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:49.308571  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:49.383107  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:49.651162  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:49.806751  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:49.808963  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:49.824074  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:49.881704  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:50.151359  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:50.306705  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:50.307760  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:50.381752  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:50.651619  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:50.807024  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:50.807449  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:50.882098  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:51.151250  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:51.305657  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:51.308847  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:51.381256  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:51.651480  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:51.806136  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:51.807202  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:51.824624  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:51.881772  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:52.151774  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:52.306360  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:52.307344  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:52.381749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:52.651935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:52.805472  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:52.807723  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:52.881980  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:53.151292  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:53.305853  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:53.307401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:53.381700  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:53.651753  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:53.806884  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:53.808011  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:53.889286  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:54.151976  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:54.313378  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:54.314032  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:54.331764  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:54.383098  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:54.654262  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:54.806665  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:54.808327  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:54.882581  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:55.151554  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:55.305817  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:55.315657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:55.381699  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:55.651745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:55.807453  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:55.807749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:55.881627  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:56.151663  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:56.306862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:56.307842  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:56.381755  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:56.650910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:56.805901  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:56.808027  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:56.825466  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:56.882089  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:57.150875  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:57.305187  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:57.308069  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:57.381468  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:57.652256  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:57.806833  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:57.807400  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:57.883074  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:58.152073  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:58.306166  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:58.308155  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:58.382027  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:58.651913  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:58.806042  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:58.807794  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:58.881973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:59.151346  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:59.305221  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:59.307998  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:59.324003  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:18:59.381747  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:18:59.651620  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:18:59.806086  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:18:59.807734  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:18:59.881024  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:00.185974  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:00.315098  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:00.316770  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:00.382457  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:00.652109  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:00.805970  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:00.807534  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:00.881102  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:01.151765  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:01.307548  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:01.308147  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:01.324498  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:01.381758  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:01.651635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:01.806635  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:01.807797  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:01.881253  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:02.151231  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:02.306158  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:02.308461  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:02.381888  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:02.651353  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:02.807051  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:02.807834  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:02.882233  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:03.155701  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:03.306867  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:03.307457  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:03.324734  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:03.381451  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:03.651600  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:03.806185  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:03.807213  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:03.881901  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:04.150845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:04.306557  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:04.307248  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:04.382211  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:04.651901  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:04.806916  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:04.807647  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:04.881554  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:05.151320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:05.306075  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:05.306839  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:05.381763  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:05.652015  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:05.806733  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:05.807311  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:05.824044  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:05.881977  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:06.152602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:06.305586  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:06.306078  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:06.381886  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:06.651902  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:06.805699  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:06.809393  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:06.882120  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:07.151992  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:07.306583  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:07.307286  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:07.381476  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:07.651568  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:07.806261  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:07.807968  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:07.824812  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:07.882157  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:08.151405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:08.305312  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:08.307691  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:08.381232  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:08.651209  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:08.806567  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:08.807262  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:08.881706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:09.151567  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:09.306304  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:09.307034  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:09.381639  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:09.651215  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:09.805915  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:09.808275  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:09.825146  596166 node_ready.go:53] node "addons-747597" has status "Ready":"False"
	I0717 19:19:09.882100  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:10.183065  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:10.307399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:10.309890  596166 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 19:19:10.309970  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:10.324200  596166 node_ready.go:49] node "addons-747597" has status "Ready":"True"
	I0717 19:19:10.324263  596166 node_ready.go:38] duration metric: took 43.003517192s for node "addons-747597" to be "Ready" ...
	I0717 19:19:10.324305  596166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:19:10.356631  596166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:10.389794  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:10.656099  596166 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 19:19:10.656133  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:10.808055  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:10.814348  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:10.882335  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.173896  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:11.308111  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:11.309427  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:11.407981  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.653558  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:11.809542  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:11.811022  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:11.863675  596166 pod_ready.go:92] pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.863708  596166 pod_ready.go:81] duration metric: took 1.506999835s for pod "coredns-7db6d8ff4d-vx2ls" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.863753  596166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.871274  596166 pod_ready.go:92] pod "etcd-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.871301  596166 pod_ready.go:81] duration metric: took 7.527058ms for pod "etcd-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.871316  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.876869  596166 pod_ready.go:92] pod "kube-apiserver-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.876893  596166 pod_ready.go:81] duration metric: took 5.5668ms for pod "kube-apiserver-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.876905  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.881695  596166 pod_ready.go:92] pod "kube-controller-manager-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.881720  596166 pod_ready.go:81] duration metric: took 4.80603ms for pod "kube-controller-manager-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.881733  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gcfj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.882010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:11.926098  596166 pod_ready.go:92] pod "kube-proxy-6gcfj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:11.926126  596166 pod_ready.go:81] duration metric: took 44.38481ms for pod "kube-proxy-6gcfj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:11.926138  596166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.154853  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:12.310754  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:12.312180  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:12.326469  596166 pod_ready.go:92] pod "kube-scheduler-addons-747597" in "kube-system" namespace has status "Ready":"True"
	I0717 19:19:12.326543  596166 pod_ready.go:81] duration metric: took 400.396085ms for pod "kube-scheduler-addons-747597" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.326570  596166 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace to be "Ready" ...
	I0717 19:19:12.382978  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:12.657154  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:12.806779  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:12.820825  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:12.881903  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:13.164496  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:13.306314  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:13.310845  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:13.383013  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:13.661806  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:13.809706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:13.811158  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:13.882520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:14.152781  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:14.309497  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:14.310397  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:14.334777  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:14.382626  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:14.653900  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:14.809844  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:14.811947  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:14.882285  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:15.154070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:15.308150  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:15.309580  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:15.382201  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:15.653751  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:15.807737  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:15.809013  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:15.884007  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:16.153401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:16.306037  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:16.310276  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:16.382095  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:16.653006  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:16.809577  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:16.810910  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:16.836016  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:16.882922  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:17.156236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:17.309968  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:17.311356  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:17.381282  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:17.652405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:17.807019  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:17.808332  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:17.881907  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:18.154264  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:18.308003  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:18.308965  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:18.381303  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:18.653490  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:18.808635  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:18.809373  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:18.882936  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:19.162608  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:19.310839  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:19.313093  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:19.349937  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:19.383547  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:19.654989  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:19.806827  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:19.811500  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:19.881633  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:20.154059  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:20.309932  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:20.311120  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:20.381469  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:20.652973  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:20.807879  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:20.814657  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:20.881629  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:21.152940  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:21.306344  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:21.307644  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:21.382664  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:21.653320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:21.811355  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:21.813506  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:21.836486  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:21.882004  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:22.152728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:22.306762  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:22.313865  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:22.382585  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:22.654573  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:22.809399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:22.821305  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:22.882275  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:23.153964  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:23.307399  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:23.308709  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:23.382295  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:23.652954  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:23.806953  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:23.809218  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:23.837073  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:23.886298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:24.153776  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:24.311267  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:24.312528  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:24.382106  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:24.652682  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:24.805710  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:24.808747  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:24.881749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:25.153011  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:25.307854  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:25.309018  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:25.382333  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:25.653015  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:25.807502  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:25.810351  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:25.882315  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:26.173681  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:26.306630  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:26.315510  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:26.333181  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:26.382497  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:26.653657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:26.812943  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:26.819499  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:26.882605  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:27.154963  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:27.307116  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:27.311207  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:27.383553  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:27.654194  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:27.812514  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:27.812844  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:27.881905  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:28.153420  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:28.309064  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:28.311211  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:28.333287  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:28.381587  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:28.654336  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:28.820278  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:28.830222  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:28.883166  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:29.155039  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:29.311066  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:29.312027  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:29.381578  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:29.653513  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:29.805743  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:29.809677  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:29.881960  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:30.153670  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:30.305870  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:30.308915  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:30.381428  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:30.652707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:30.808551  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:30.809309  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:30.842609  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:30.887149  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:31.152745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:31.308087  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:31.313520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:31.381966  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:31.671728  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:31.807773  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:31.812621  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:31.881635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:32.153463  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:32.307758  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:32.310159  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:32.381450  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:32.653190  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:32.808017  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:32.811751  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:32.849134  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:32.885772  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:33.158697  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:33.312754  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:33.315420  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:33.383101  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:33.657191  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:33.815613  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:33.817592  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:33.882091  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:34.153595  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:34.306812  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:34.310635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:34.383007  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:34.653466  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:34.810103  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 19:19:34.811241  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:34.885070  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:35.153263  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:35.310122  596166 kapi.go:107] duration metric: took 1m4.507180996s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 19:19:35.311781  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:35.337707  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:35.382214  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:35.653452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:35.815772  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:35.882827  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:36.153270  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:36.306402  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:36.381785  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:36.653179  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:36.805742  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:36.881758  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:37.155072  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:37.306618  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:37.382418  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:37.654182  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:37.808033  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:37.834031  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:37.882662  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:38.153582  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:38.307213  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:38.382715  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:38.652761  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:38.807810  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:38.881841  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:39.153094  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:39.306349  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:39.381430  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:39.653298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:39.805591  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:39.882045  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:40.153032  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:40.306007  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:40.332879  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:40.382112  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:40.653622  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:40.806975  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:40.882236  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:41.154009  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:41.306620  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:41.382412  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:41.657924  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:41.806804  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:41.892602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:42.163353  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:42.308064  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:42.335644  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:42.382922  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:42.653935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:42.806568  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:42.882199  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:43.153505  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:43.306811  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:43.381463  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:43.652349  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:43.806333  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:43.881382  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:44.153025  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:44.307331  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:44.382737  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:44.652474  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:44.806338  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:44.833482  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:44.882529  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:45.169905  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:45.315822  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:45.382675  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:45.652808  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:45.805938  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:45.882335  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:46.153223  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:46.306372  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:46.382265  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:46.652582  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:46.807586  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:46.835261  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:46.881822  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:47.154518  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:47.307251  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:47.381981  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:47.653379  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:47.806308  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:47.882318  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:48.152511  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:48.307118  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:48.381949  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:48.653037  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:48.806865  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:48.883276  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:49.152088  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:49.306972  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:49.333904  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:49.381449  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:49.652515  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:49.805847  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:49.882319  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:50.152458  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:50.306491  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:50.382336  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:50.653643  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:50.809402  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:50.881991  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:51.157787  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:51.306469  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:51.335065  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:51.383749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:51.654822  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:51.806532  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:51.881619  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:52.160237  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:52.306759  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:52.381884  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:52.653187  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:52.806547  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:52.882156  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:53.153362  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:53.305639  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:53.381520  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:53.655117  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:53.807254  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:53.834703  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:53.882298  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:54.153303  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:54.306036  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:54.381994  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:54.665001  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:54.806638  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:54.882230  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:55.156192  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:55.306504  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:55.381598  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:55.653430  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:55.807970  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:55.882402  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:56.156266  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:56.306462  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:56.337581  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:56.381658  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:56.653340  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:56.806389  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:56.882525  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:57.152862  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:57.306087  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:57.382028  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:57.654059  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:57.806929  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:57.881405  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:58.153680  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:58.306268  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:58.381980  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:58.655452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:58.806712  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:58.833168  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:58.883564  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:59.154068  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:59.306585  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:59.382534  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:19:59.653043  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:19:59.807881  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:19:59.882446  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:00.213245  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:00.314393  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:00.438957  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:00.670828  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:00.807909  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:00.843623  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:00.911807  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:01.153733  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:01.306984  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:01.382865  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:01.653711  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:01.807397  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:01.882678  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:02.155443  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:02.307257  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:02.390766  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:02.654349  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:02.808461  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:02.884096  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:03.153592  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:03.305936  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:03.336214  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:03.381756  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:03.654714  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:03.806507  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:03.882657  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:04.153795  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:04.306762  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:04.385065  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:04.653872  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:04.806822  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:04.882153  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:05.155446  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:05.306162  596166 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 19:20:05.381965  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:05.655543  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:05.806720  596166 kapi.go:107] duration metric: took 1m35.005213294s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 19:20:05.833404  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:05.881832  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:06.153064  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:06.381429  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:06.654339  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:06.882586  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:07.155649  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:07.382706  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:07.653210  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:07.882113  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:08.155707  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:08.332982  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:08.381401  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:08.652807  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:08.881676  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:09.153617  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:09.382170  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:09.653648  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:09.882749  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:10.153215  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:10.337659  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:10.382942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:10.652602  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:10.882153  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:11.152745  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:11.381721  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:11.657139  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:11.882033  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:12.153454  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:12.381926  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:12.652935  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:12.833334  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:12.881646  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:13.152452  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:13.381976  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:13.655784  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:13.881977  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:14.153320  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:14.381734  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:14.652094  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:14.881490  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:15.163134  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:15.333142  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:15.381501  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:15.653424  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:15.881821  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:16.155010  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:16.382060  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:16.655782  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:16.881942  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:17.153810  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:17.381841  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:17.653746  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 19:20:17.832570  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:17.881635  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:18.155558  596166 kapi.go:107] duration metric: took 1m47.008633567s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 19:20:18.382626  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:18.882264  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:19.381699  596166 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 19:20:19.832851  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:19.881282  596166 kapi.go:107] duration metric: took 1m45.003372639s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 19:20:19.894018  596166 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-747597 cluster.
	I0717 19:20:19.896084  596166 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 19:20:19.897790  596166 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 19:20:19.899801  596166 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0717 19:20:19.901709  596166 addons.go:510] duration metric: took 1m56.052205859s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0717 19:20:21.833250  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:24.333359  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:26.833059  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:29.332636  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:31.333398  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:33.333610  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:35.334910  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:37.833178  596166 pod_ready.go:102] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:38.332848  596166 pod_ready.go:92] pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:38.332875  596166 pod_ready.go:81] duration metric: took 1m26.006285543s for pod "metrics-server-c59844bb4-m2zcj" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.332888  596166 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.338079  596166 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:38.338105  596166 pod_ready.go:81] duration metric: took 5.208498ms for pod "nvidia-device-plugin-daemonset-8tq66" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:38.338125  596166 pod_ready.go:38] duration metric: took 1m28.013796055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:20:38.338140  596166 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:20:38.338885  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:20:38.338956  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:20:38.392473  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:38.392512  596166 cri.go:89] found id: ""
	I0717 19:20:38.392522  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:20:38.392586  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.396885  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:20:38.396966  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:20:38.440953  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:38.440973  596166 cri.go:89] found id: ""
	I0717 19:20:38.440980  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:20:38.441037  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.444468  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:20:38.444542  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:20:38.484525  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:38.484546  596166 cri.go:89] found id: ""
	I0717 19:20:38.484554  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:20:38.484617  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.488077  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:20:38.488153  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:20:38.530668  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:38.530692  596166 cri.go:89] found id: ""
	I0717 19:20:38.530700  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:20:38.530801  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.534518  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:20:38.534619  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:20:38.573602  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:38.573622  596166 cri.go:89] found id: ""
	I0717 19:20:38.573630  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:20:38.573687  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.577044  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:20:38.577117  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:20:38.616783  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:38.616803  596166 cri.go:89] found id: ""
	I0717 19:20:38.616811  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:20:38.616867  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.620301  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:20:38.620402  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:20:38.661177  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:38.661200  596166 cri.go:89] found id: ""
	I0717 19:20:38.661208  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:20:38.661265  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:38.664587  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:20:38.664629  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:38.707813  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:20:38.707841  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:38.746640  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:20:38.746668  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:38.801090  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:20:38.801119  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:20:38.856864  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857108  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857287  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:38.857480  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:38.890457  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:20:38.890492  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:38.938571  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:20:38.938606  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:38.986363  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:20:38.986398  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:39.068362  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:20:39.068405  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:20:39.161501  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:20:39.161541  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:20:39.216659  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:20:39.216689  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:20:39.237417  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:20:39.237447  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:20:39.405954  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:20:39.405984  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:39.458628  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:39.458660  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:20:39.458708  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:20:39.458720  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458730  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458742  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:39.458749  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:39.458758  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:39.458764  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:20:49.460164  596166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:49.473636  596166 api_server.go:72] duration metric: took 2m25.624488221s to wait for apiserver process to appear ...
	I0717 19:20:49.473664  596166 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:20:49.473696  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:20:49.473754  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:20:49.513244  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:49.513264  596166 cri.go:89] found id: ""
	I0717 19:20:49.513272  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:20:49.513330  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.517172  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:20:49.517242  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:20:49.558151  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:49.558183  596166 cri.go:89] found id: ""
	I0717 19:20:49.558193  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:20:49.558267  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.561725  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:20:49.561796  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:20:49.601996  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:49.602019  596166 cri.go:89] found id: ""
	I0717 19:20:49.602026  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:20:49.602084  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.605540  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:20:49.605618  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:20:49.645276  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:49.645299  596166 cri.go:89] found id: ""
	I0717 19:20:49.645307  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:20:49.645362  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.648759  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:20:49.648829  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:20:49.686778  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:49.686798  596166 cri.go:89] found id: ""
	I0717 19:20:49.686807  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:20:49.686880  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.690464  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:20:49.690537  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:20:49.729136  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:49.729169  596166 cri.go:89] found id: ""
	I0717 19:20:49.729178  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:20:49.729253  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.732947  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:20:49.733019  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:20:49.775402  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:49.775427  596166 cri.go:89] found id: ""
	I0717 19:20:49.775435  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:20:49.775499  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:20:49.779243  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:20:49.779271  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:20:49.823283  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:20:49.823312  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:20:49.867035  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:20:49.867061  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:20:49.956705  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:20:49.956742  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:20:50.067188  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:20:50.067229  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:20:50.088589  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:20:50.088626  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:20:50.142767  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:20:50.142804  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:20:50.208211  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:20:50.208242  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:20:50.259213  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:20:50.259246  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:20:50.329925  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:20:50.329953  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:20:50.378251  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378471  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378647  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.378838  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:50.413960  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:20:50.413992  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:20:50.562856  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:20:50.562892  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:20:50.620147  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:50.620178  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:20:50.620227  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:20:50.620239  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620246  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620260  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:20:50.620273  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:20:50.620285  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:20:50.620291  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:21:00.620932  596166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 19:21:00.657740  596166 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 19:21:00.660419  596166 api_server.go:141] control plane version: v1.30.2
	I0717 19:21:00.660443  596166 api_server.go:131] duration metric: took 11.186772098s to wait for apiserver health ...
	I0717 19:21:00.660453  596166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:21:00.660474  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:21:00.660536  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:21:00.714414  596166 cri.go:89] found id: "e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:21:00.714436  596166 cri.go:89] found id: ""
	I0717 19:21:00.714444  596166 logs.go:276] 1 containers: [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957]
	I0717 19:21:00.714501  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.718323  596166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:21:00.718398  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:21:00.763288  596166 cri.go:89] found id: "aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:21:00.763310  596166 cri.go:89] found id: ""
	I0717 19:21:00.763318  596166 logs.go:276] 1 containers: [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18]
	I0717 19:21:00.763391  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.767433  596166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:21:00.767497  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:21:00.806950  596166 cri.go:89] found id: "6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:21:00.806972  596166 cri.go:89] found id: ""
	I0717 19:21:00.806981  596166 logs.go:276] 1 containers: [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38]
	I0717 19:21:00.807038  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.810420  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:21:00.810508  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:21:00.853090  596166 cri.go:89] found id: "498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:21:00.853111  596166 cri.go:89] found id: ""
	I0717 19:21:00.853119  596166 logs.go:276] 1 containers: [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481]
	I0717 19:21:00.853196  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.856635  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:21:00.856716  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:21:00.897080  596166 cri.go:89] found id: "61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:21:00.897113  596166 cri.go:89] found id: ""
	I0717 19:21:00.897122  596166 logs.go:276] 1 containers: [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc]
	I0717 19:21:00.897209  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.900748  596166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:21:00.900871  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:21:00.938419  596166 cri.go:89] found id: "4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:21:00.938442  596166 cri.go:89] found id: ""
	I0717 19:21:00.938450  596166 logs.go:276] 1 containers: [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00]
	I0717 19:21:00.938526  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.942361  596166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:21:00.942462  596166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:21:00.983503  596166 cri.go:89] found id: "b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:21:00.983565  596166 cri.go:89] found id: ""
	I0717 19:21:00.983599  596166 logs.go:276] 1 containers: [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163]
	I0717 19:21:00.983671  596166 ssh_runner.go:195] Run: which crictl
	I0717 19:21:00.987253  596166 logs.go:123] Gathering logs for kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] ...
	I0717 19:21:00.987278  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163"
	I0717 19:21:01.062314  596166 logs.go:123] Gathering logs for container status ...
	I0717 19:21:01.062346  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:21:01.112582  596166 logs.go:123] Gathering logs for dmesg ...
	I0717 19:21:01.112619  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:21:01.131715  596166 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:21:01.131747  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:21:01.277187  596166 logs.go:123] Gathering logs for kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] ...
	I0717 19:21:01.277215  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957"
	I0717 19:21:01.337345  596166 logs.go:123] Gathering logs for kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] ...
	I0717 19:21:01.337380  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481"
	I0717 19:21:01.377863  596166 logs.go:123] Gathering logs for kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] ...
	I0717 19:21:01.377898  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00"
	I0717 19:21:01.447789  596166 logs.go:123] Gathering logs for kubelet ...
	I0717 19:21:01.447826  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 19:21:01.498689  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.498938  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.499121  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.499313  596166 logs.go:138] Found kubelet problem: Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:21:01.534308  596166 logs.go:123] Gathering logs for etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] ...
	I0717 19:21:01.534339  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18"
	I0717 19:21:01.583096  596166 logs.go:123] Gathering logs for coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] ...
	I0717 19:21:01.583131  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38"
	I0717 19:21:01.638408  596166 logs.go:123] Gathering logs for kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] ...
	I0717 19:21:01.638445  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc"
	I0717 19:21:01.676654  596166 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:21:01.676682  596166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:21:01.771632  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:21:01.771664  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 19:21:01.771747  596166 out.go:239] X Problems detected in kubelet:
	W0717 19:21:01.771760  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771786  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.162753    1547 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747597" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771805  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.168008    1547 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	W0717 19:21:01.771818  596166 out.go:239]   Jul 17 19:19:10 addons-747597 kubelet[1547]: E0717 19:19:10.168064    1547 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-747597" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747597' and this object
	I0717 19:21:01.771824  596166 out.go:304] Setting ErrFile to fd 2...
	I0717 19:21:01.771830  596166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:21:11.784350  596166 system_pods.go:59] 18 kube-system pods found
	I0717 19:21:11.784389  596166 system_pods.go:61] "coredns-7db6d8ff4d-vx2ls" [082916ef-1119-4778-9742-38e8695b17eb] Running
	I0717 19:21:11.784399  596166 system_pods.go:61] "csi-hostpath-attacher-0" [1358f44c-4762-4923-af91-c24f5aac1261] Running
	I0717 19:21:11.784404  596166 system_pods.go:61] "csi-hostpath-resizer-0" [9ff03202-1a5b-4edd-8712-5fb2b57bc80d] Running
	I0717 19:21:11.784408  596166 system_pods.go:61] "csi-hostpathplugin-b2j8t" [46f4a30f-3aa2-4a55-93a0-d60b33eb8447] Running
	I0717 19:21:11.784412  596166 system_pods.go:61] "etcd-addons-747597" [604c419d-7405-426d-8546-7b8a298fd63f] Running
	I0717 19:21:11.784417  596166 system_pods.go:61] "kindnet-hr4v9" [249b1478-18aa-46b8-ac5c-c98c42238bcd] Running
	I0717 19:21:11.784421  596166 system_pods.go:61] "kube-apiserver-addons-747597" [9cdb0970-bdae-46ff-835b-309056cdb2f3] Running
	I0717 19:21:11.784426  596166 system_pods.go:61] "kube-controller-manager-addons-747597" [72516744-2858-4838-a858-6f42cefe9915] Running
	I0717 19:21:11.784430  596166 system_pods.go:61] "kube-ingress-dns-minikube" [21c73a81-efe6-4fc7-b825-b2655ceeaab5] Running
	I0717 19:21:11.784444  596166 system_pods.go:61] "kube-proxy-6gcfj" [ad90d9f5-2b4a-49c6-b1e8-b3dd0668fa24] Running
	I0717 19:21:11.784452  596166 system_pods.go:61] "kube-scheduler-addons-747597" [4224e17d-41c4-4b65-967d-19655bbedcfa] Running
	I0717 19:21:11.784456  596166 system_pods.go:61] "metrics-server-c59844bb4-m2zcj" [ecfedd7e-e869-4dd1-b482-62f0706cc601] Running
	I0717 19:21:11.784460  596166 system_pods.go:61] "nvidia-device-plugin-daemonset-8tq66" [e1a33d1c-572f-4efa-b24a-abffc419c427] Running
	I0717 19:21:11.784464  596166 system_pods.go:61] "registry-656c9c8d9c-4kkkf" [9820910e-bb3a-48fe-b2d1-5c69c2b66429] Running
	I0717 19:21:11.784470  596166 system_pods.go:61] "registry-proxy-qczlm" [dc1faa8a-6f1b-41a9-b047-b18156274ad5] Running
	I0717 19:21:11.784475  596166 system_pods.go:61] "snapshot-controller-745499f584-f69f7" [a944a321-09b4-4286-9302-a0657345e9b7] Running
	I0717 19:21:11.784482  596166 system_pods.go:61] "snapshot-controller-745499f584-tbjqv" [12edb1ce-6753-4e89-a1b4-f6bbfad2d478] Running
	I0717 19:21:11.784486  596166 system_pods.go:61] "storage-provisioner" [3d085cc1-2744-4f4a-a266-eb70ec60d46a] Running
	I0717 19:21:11.784492  596166 system_pods.go:74] duration metric: took 11.124033696s to wait for pod list to return data ...
	I0717 19:21:11.784504  596166 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:21:11.786738  596166 default_sa.go:45] found service account: "default"
	I0717 19:21:11.786762  596166 default_sa.go:55] duration metric: took 2.252467ms for default service account to be created ...
	I0717 19:21:11.786772  596166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:21:11.796544  596166 system_pods.go:86] 18 kube-system pods found
	I0717 19:21:11.796644  596166 system_pods.go:89] "coredns-7db6d8ff4d-vx2ls" [082916ef-1119-4778-9742-38e8695b17eb] Running
	I0717 19:21:11.796666  596166 system_pods.go:89] "csi-hostpath-attacher-0" [1358f44c-4762-4923-af91-c24f5aac1261] Running
	I0717 19:21:11.796684  596166 system_pods.go:89] "csi-hostpath-resizer-0" [9ff03202-1a5b-4edd-8712-5fb2b57bc80d] Running
	I0717 19:21:11.796715  596166 system_pods.go:89] "csi-hostpathplugin-b2j8t" [46f4a30f-3aa2-4a55-93a0-d60b33eb8447] Running
	I0717 19:21:11.796799  596166 system_pods.go:89] "etcd-addons-747597" [604c419d-7405-426d-8546-7b8a298fd63f] Running
	I0717 19:21:11.796823  596166 system_pods.go:89] "kindnet-hr4v9" [249b1478-18aa-46b8-ac5c-c98c42238bcd] Running
	I0717 19:21:11.796842  596166 system_pods.go:89] "kube-apiserver-addons-747597" [9cdb0970-bdae-46ff-835b-309056cdb2f3] Running
	I0717 19:21:11.796856  596166 system_pods.go:89] "kube-controller-manager-addons-747597" [72516744-2858-4838-a858-6f42cefe9915] Running
	I0717 19:21:11.796862  596166 system_pods.go:89] "kube-ingress-dns-minikube" [21c73a81-efe6-4fc7-b825-b2655ceeaab5] Running
	I0717 19:21:11.796869  596166 system_pods.go:89] "kube-proxy-6gcfj" [ad90d9f5-2b4a-49c6-b1e8-b3dd0668fa24] Running
	I0717 19:21:11.796874  596166 system_pods.go:89] "kube-scheduler-addons-747597" [4224e17d-41c4-4b65-967d-19655bbedcfa] Running
	I0717 19:21:11.796882  596166 system_pods.go:89] "metrics-server-c59844bb4-m2zcj" [ecfedd7e-e869-4dd1-b482-62f0706cc601] Running
	I0717 19:21:11.796886  596166 system_pods.go:89] "nvidia-device-plugin-daemonset-8tq66" [e1a33d1c-572f-4efa-b24a-abffc419c427] Running
	I0717 19:21:11.796890  596166 system_pods.go:89] "registry-656c9c8d9c-4kkkf" [9820910e-bb3a-48fe-b2d1-5c69c2b66429] Running
	I0717 19:21:11.796896  596166 system_pods.go:89] "registry-proxy-qczlm" [dc1faa8a-6f1b-41a9-b047-b18156274ad5] Running
	I0717 19:21:11.796903  596166 system_pods.go:89] "snapshot-controller-745499f584-f69f7" [a944a321-09b4-4286-9302-a0657345e9b7] Running
	I0717 19:21:11.796932  596166 system_pods.go:89] "snapshot-controller-745499f584-tbjqv" [12edb1ce-6753-4e89-a1b4-f6bbfad2d478] Running
	I0717 19:21:11.796942  596166 system_pods.go:89] "storage-provisioner" [3d085cc1-2744-4f4a-a266-eb70ec60d46a] Running
	I0717 19:21:11.796950  596166 system_pods.go:126] duration metric: took 10.172942ms to wait for k8s-apps to be running ...
	I0717 19:21:11.796962  596166 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:21:11.797031  596166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:21:11.808603  596166 system_svc.go:56] duration metric: took 11.631622ms WaitForService to wait for kubelet
	I0717 19:21:11.808633  596166 kubeadm.go:582] duration metric: took 2m47.959489442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:21:11.808654  596166 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:21:11.812662  596166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 19:21:11.812697  596166 node_conditions.go:123] node cpu capacity is 2
	I0717 19:21:11.812709  596166 node_conditions.go:105] duration metric: took 4.049739ms to run NodePressure ...
	I0717 19:21:11.812731  596166 start.go:241] waiting for startup goroutines ...
	I0717 19:21:11.812740  596166 start.go:246] waiting for cluster config update ...
	I0717 19:21:11.812758  596166 start.go:255] writing updated cluster config ...
	I0717 19:21:11.813047  596166 ssh_runner.go:195] Run: rm -f paused
	I0717 19:21:12.165166  596166 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:21:12.169124  596166 out.go:177] * Done! kubectl is now configured to use "addons-747597" cluster and "default" namespace by default
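The same four gcp-auth kubelet problems are flagged repeatedly in the transcript above each time minikube re-gathers logs. When triaging a report like this one, it can help to pull out just the deduplicated problem messages. Below is a minimal sketch, assuming only the klog line shape visible above (`W0717 ... logs.go:138] Found kubelet problem: <message>`); the regex, function name, and sample string are illustrative, not part of minikube.

```python
import re

# Matches the "Found kubelet problem" lines emitted by minikube's logs.go,
# as seen in the transcript above (assumed klog format).
PROBLEM_RE = re.compile(r"logs\.go:\d+\] Found kubelet problem: (.*)")

def kubelet_problems(log_text: str) -> list[str]:
    """Return deduplicated kubelet problem messages, in first-seen order."""
    seen: dict[str, None] = {}  # dict used as an insertion-ordered set
    for line in log_text.splitlines():
        m = PROBLEM_RE.search(line)
        if m:
            seen.setdefault(m.group(1).strip(), None)
    return list(seen)

# Hypothetical sample line, shaped like the warnings in this report.
sample = (
    "W0717 19:20:50.378251  596166 logs.go:138] Found kubelet problem: "
    "Jul 17 19:19:10 addons-747597 kubelet[1547]: W0717 19:19:10.162674    1547 "
    'reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap'
)
print(kubelet_problems(sample))
```

Feeding it the full log would collapse the four warnings repeated at 19:20:50 and 19:21:01 into a single set of distinct messages.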
	
	
	==> CRI-O <==
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.484982026Z" level=info msg="Removed container 2c5f0c15cf3016ec51d39a52aea1710ef0c24c9d8bda91f6563ce95ee554a9fd: ingress-nginx/ingress-nginx-admission-patch-4t94j/patch" id=1ce68415-e0eb-4c7b-8aed-733dcca20b64 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.486480705Z" level=info msg="Removing container: 799d68539952bb580ed18a368930a6d91c2292838b9abcf1e8db0145df490e03" id=01fd9a1c-4622-4e70-b99d-6d59631c7654 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.503257434Z" level=info msg="Removed container 799d68539952bb580ed18a368930a6d91c2292838b9abcf1e8db0145df490e03: ingress-nginx/ingress-nginx-admission-create-m94z8/create" id=01fd9a1c-4622-4e70-b99d-6d59631c7654 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.505224321Z" level=info msg="Stopping pod sandbox: 37a5a3ad7daa614c80f92cbb4f2d5bb605b0ab8be343acaccf03dac0f9a62db9" id=f16f5468-2bc9-4d71-89d1-775619bc763c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.505271369Z" level=info msg="Stopped pod sandbox (already stopped): 37a5a3ad7daa614c80f92cbb4f2d5bb605b0ab8be343acaccf03dac0f9a62db9" id=f16f5468-2bc9-4d71-89d1-775619bc763c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.505722527Z" level=info msg="Removing pod sandbox: 37a5a3ad7daa614c80f92cbb4f2d5bb605b0ab8be343acaccf03dac0f9a62db9" id=e25a96d4-c8f0-4c33-9732-afad77f68768 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.514947250Z" level=info msg="Removed pod sandbox: 37a5a3ad7daa614c80f92cbb4f2d5bb605b0ab8be343acaccf03dac0f9a62db9" id=e25a96d4-c8f0-4c33-9732-afad77f68768 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.515614252Z" level=info msg="Stopping pod sandbox: fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=b780c37c-193e-410d-9f5a-590b237672d3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.515652586Z" level=info msg="Stopped pod sandbox (already stopped): fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=b780c37c-193e-410d-9f5a-590b237672d3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.516008925Z" level=info msg="Removing pod sandbox: fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=d9b4f181-140a-4e5c-96cf-3f35aa398498 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.524394283Z" level=info msg="Removed pod sandbox: fd7803a66233a7690114fa4653765dfc58aec598b8fe080ed161c53308bdaf31" id=d9b4f181-140a-4e5c-96cf-3f35aa398498 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.525084915Z" level=info msg="Stopping pod sandbox: 30f06f9260db2e618936862cfaaa570f7cc5ba511b6dc1dffffb155e12a28636" id=29174de7-3970-47ed-a375-37c04c4719e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.525127327Z" level=info msg="Stopped pod sandbox (already stopped): 30f06f9260db2e618936862cfaaa570f7cc5ba511b6dc1dffffb155e12a28636" id=29174de7-3970-47ed-a375-37c04c4719e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.525445267Z" level=info msg="Removing pod sandbox: 30f06f9260db2e618936862cfaaa570f7cc5ba511b6dc1dffffb155e12a28636" id=a6724bf8-0fa2-45f4-ae9c-966d0e4f991d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.534332153Z" level=info msg="Removed pod sandbox: 30f06f9260db2e618936862cfaaa570f7cc5ba511b6dc1dffffb155e12a28636" id=a6724bf8-0fa2-45f4-ae9c-966d0e4f991d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.534930674Z" level=info msg="Stopping pod sandbox: c5711d02b1b22c642cb69dc850ad90867d6391b4dcee92ceccf629221fb9e433" id=026a1a3d-cdf8-4034-a85b-b8fe5f025ae4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.534977747Z" level=info msg="Stopped pod sandbox (already stopped): c5711d02b1b22c642cb69dc850ad90867d6391b4dcee92ceccf629221fb9e433" id=026a1a3d-cdf8-4034-a85b-b8fe5f025ae4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.535687489Z" level=info msg="Removing pod sandbox: c5711d02b1b22c642cb69dc850ad90867d6391b4dcee92ceccf629221fb9e433" id=6811a7fd-a59e-40b9-957a-33e59201d0b7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:25:10 addons-747597 crio[966]: time="2024-07-17 19:25:10.548657162Z" level=info msg="Removed pod sandbox: c5711d02b1b22c642cb69dc850ad90867d6391b4dcee92ceccf629221fb9e433" id=6811a7fd-a59e-40b9-957a-33e59201d0b7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 19:27:47 addons-747597 crio[966]: time="2024-07-17 19:27:47.256602753Z" level=info msg="Stopping container: 415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0 (timeout: 30s)" id=0dd4f27e-5a63-4b7a-a9fb-797ff2ede1d2 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 19:27:48 addons-747597 crio[966]: time="2024-07-17 19:27:48.431797240Z" level=info msg="Stopped container 415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0: kube-system/metrics-server-c59844bb4-m2zcj/metrics-server" id=0dd4f27e-5a63-4b7a-a9fb-797ff2ede1d2 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 19:27:48 addons-747597 crio[966]: time="2024-07-17 19:27:48.432625685Z" level=info msg="Stopping pod sandbox: 7ca326c992ed6d012862d871497931e0900449efcc6058ab6a9f9655d766b4e1" id=c7010348-38c6-4db7-8e8f-1cd0c80e55dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 19:27:48 addons-747597 crio[966]: time="2024-07-17 19:27:48.432859178Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-m2zcj Namespace:kube-system ID:7ca326c992ed6d012862d871497931e0900449efcc6058ab6a9f9655d766b4e1 UID:ecfedd7e-e869-4dd1-b482-62f0706cc601 NetNS:/var/run/netns/00949ceb-9a21-4cde-a87b-478b1120fa62 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 19:27:48 addons-747597 crio[966]: time="2024-07-17 19:27:48.433005040Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-m2zcj from CNI network \"kindnet\" (type=ptp)"
	Jul 17 19:27:48 addons-747597 crio[966]: time="2024-07-17 19:27:48.491069654Z" level=info msg="Stopped pod sandbox: 7ca326c992ed6d012862d871497931e0900449efcc6058ab6a9f9655d766b4e1" id=c7010348-38c6-4db7-8e8f-1cd0c80e55dd name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f03a99eddc5ee       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   2b0240e0948ce       hello-world-app-6778b5fc9f-9s966
	a9ce00812b756       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   55085fd909ea2       nginx
	167447215a84b       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   6 minutes ago       Running             headlamp                  0                   e0e0d584ddf79       headlamp-7867546754-g6rr2
	78c53578da440       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago       Running             gcp-auth                  0                   3d73319215130       gcp-auth-5db96cd9b4-twc52
	769fb0f4d544f       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         7 minutes ago       Running             yakd                      0                   04e3e03489361       yakd-dashboard-799879c74f-ftstw
	415ce64e87ebf       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   7ca326c992ed6       metrics-server-c59844bb4-m2zcj
	ba3ec42298409       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   d7bf4964831cc       storage-provisioner
	6b259081db958       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   9a46fd8b7c114       coredns-7db6d8ff4d-vx2ls
	b1015172052bc       docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493                      9 minutes ago       Running             kindnet-cni               0                   a618100571d9f       kindnet-hr4v9
	61ff260c86790       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                                        9 minutes ago       Running             kube-proxy                0                   043e01218f51d       kube-proxy-6gcfj
	e41f5b0b2a396       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                                        9 minutes ago       Running             kube-apiserver            0                   0c6ee66dc17d0       kube-apiserver-addons-747597
	498353d1326cf       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                                        9 minutes ago       Running             kube-scheduler            0                   3f1f5a0bc5736       kube-scheduler-addons-747597
	4b65ebb30b9af       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                                        9 minutes ago       Running             kube-controller-manager   0                   ba3f2aae5569f       kube-controller-manager-addons-747597
	aafaeaa9e53bf       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   ccb207d7f192e       etcd-addons-747597
	
	
	==> coredns [6b259081db958f61eebe0c278ba3a6da161ac686f139c51eb2455c0858deee38] <==
	[INFO] 10.244.0.3:50454 - 20574 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001673695s
	[INFO] 10.244.0.3:36762 - 59039 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119425s
	[INFO] 10.244.0.3:36762 - 56985 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041953s
	[INFO] 10.244.0.3:41859 - 40266 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160016s
	[INFO] 10.244.0.3:41859 - 59478 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177271s
	[INFO] 10.244.0.3:40088 - 17247 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070818s
	[INFO] 10.244.0.3:40088 - 42048 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054646s
	[INFO] 10.244.0.3:33440 - 18761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108882s
	[INFO] 10.244.0.3:33440 - 26187 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00013083s
	[INFO] 10.244.0.3:46143 - 33670 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002220607s
	[INFO] 10.244.0.3:46143 - 14724 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002434572s
	[INFO] 10.244.0.3:43331 - 27145 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102416s
	[INFO] 10.244.0.3:43331 - 45579 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065583s
	[INFO] 10.244.0.20:55824 - 63825 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001411271s
	[INFO] 10.244.0.20:45109 - 8791 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001537949s
	[INFO] 10.244.0.20:56733 - 22193 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142843s
	[INFO] 10.244.0.20:50646 - 47549 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096394s
	[INFO] 10.244.0.20:58217 - 64747 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099101s
	[INFO] 10.244.0.20:46105 - 34875 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103819s
	[INFO] 10.244.0.20:36242 - 4054 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002926296s
	[INFO] 10.244.0.20:60183 - 37689 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002563531s
	[INFO] 10.244.0.20:54941 - 64368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000828183s
	[INFO] 10.244.0.20:54211 - 15697 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001061593s
	[INFO] 10.244.0.22:35302 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196291s
	[INFO] 10.244.0.22:42006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129124s
	
	
	==> describe nodes <==
	Name:               addons-747597
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-747597
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=addons-747597
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_18_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-747597
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-747597
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:25:18 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:25:18 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:25:18 +0000   Wed, 17 Jul 2024 19:18:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:25:18 +0000   Wed, 17 Jul 2024 19:19:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-747597
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cb652b4b04d43dbb605d68e346e8a8e
	  System UUID:                242ae9c1-ad18-41b5-803f-f2a7108e3122
	  Boot ID:                    69f17618-36a4-458d-bf7b-8c41eea0ca4f
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-9s966         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  gcp-auth                    gcp-auth-5db96cd9b4-twc52                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  headlamp                    headlamp-7867546754-g6rr2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-7db6d8ff4d-vx2ls                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m25s
	  kube-system                 etcd-addons-747597                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m38s
	  kube-system                 kindnet-hr4v9                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m25s
	  kube-system                 kube-apiserver-addons-747597             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-controller-manager-addons-747597    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-proxy-6gcfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-scheduler-addons-747597             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  yakd-dashboard              yakd-dashboard-799879c74f-ftstw          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m46s (x8 over 9m46s)  kubelet          Node addons-747597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m46s (x8 over 9m46s)  kubelet          Node addons-747597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m46s (x8 over 9m46s)  kubelet          Node addons-747597 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m38s                  kubelet          Node addons-747597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m38s                  kubelet          Node addons-747597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m38s                  kubelet          Node addons-747597 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m25s                  node-controller  Node addons-747597 event: Registered Node addons-747597 in Controller
	  Normal  NodeReady                8m38s                  kubelet          Node addons-747597 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001089] FS-Cache: O-key=[8] 'e23a5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=000000008afc45ed
	[  +0.001075] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +0.002289] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=00000000d550a1d6
	[  +0.001111] FS-Cache: O-key=[8] 'e23a5c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=000000000f99c7cd
	[  +0.001063] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +2.685345] FS-Cache: Duplicate cookie detected
	[  +0.000810] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=000000007d1888d1
	[  +0.001092] FS-Cache: O-key=[8] 'e13a5c0100000000'
	[  +0.000723] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=0000000078612e88
	[  +0.001083] FS-Cache: N-key=[8] 'e13a5c0100000000'
	[  +0.415518] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=00000000ae89f91f{9p.inode} n=00000000e9178742
	[  +0.001155] FS-Cache: O-key=[8] 'e73a5c0100000000'
	[  +0.000755] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000ae89f91f{9p.inode} n=0000000087fd2114
	[  +0.001094] FS-Cache: N-key=[8] 'e73a5c0100000000'
	
	
	==> etcd [aafaeaa9e53bf90f9b335a83622f592e1de646de4abc9191babd3f90d7ecaf18] <==
	{"level":"info","ts":"2024-07-17T19:18:04.277164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-17T19:18:04.277557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.277636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.278105Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:18:04.291704Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-17T19:18:24.233608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.290013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-hr4v9\" ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2024-07-17T19:18:24.233751Z","caller":"traceutil/trace.go:171","msg":"trace[805563241] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-hr4v9; range_end:; response_count:1; response_revision:387; }","duration":"127.59375ms","start":"2024-07-17T19:18:24.106144Z","end":"2024-07-17T19:18:24.233738Z","steps":["trace[805563241] 'range keys from in-memory index tree'  (duration: 126.450047ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:26.992139Z","caller":"traceutil/trace.go:171","msg":"trace[1646014723] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"184.323609ms","start":"2024-07-17T19:18:26.807799Z","end":"2024-07-17T19:18:26.992123Z","steps":["trace[1646014723] 'process raft request'  (duration: 184.228027ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.02561Z","caller":"traceutil/trace.go:171","msg":"trace[1511432462] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"177.950155ms","start":"2024-07-17T19:18:26.847644Z","end":"2024-07-17T19:18:27.025594Z","steps":["trace[1511432462] 'process raft request'  (duration: 177.637524ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.212832Z","caller":"traceutil/trace.go:171","msg":"trace[2112153299] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"116.687553ms","start":"2024-07-17T19:18:27.096135Z","end":"2024-07-17T19:18:27.212822Z","steps":["trace[2112153299] 'process raft request'  (duration: 116.423094ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:27.212666Z","caller":"traceutil/trace.go:171","msg":"trace[1858015837] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"116.030774ms","start":"2024-07-17T19:18:27.09662Z","end":"2024-07-17T19:18:27.212651Z","steps":["trace[1858015837] 'read index received'  (duration: 115.916534ms)","trace[1858015837] 'applied index is now lower than readState.Index'  (duration: 113.69µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:18:27.322709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.939644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-17T19:18:27.322861Z","caller":"traceutil/trace.go:171","msg":"trace[1256833354] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:407; }","duration":"226.227253ms","start":"2024-07-17T19:18:27.096598Z","end":"2024-07-17T19:18:27.322825Z","steps":["trace[1256833354] 'agreement among raft nodes before linearized reading'  (duration: 163.324441ms)","trace[1256833354] 'range keys from in-memory index tree'  (duration: 32.553837ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:18:28.652879Z","caller":"traceutil/trace.go:171","msg":"trace[1352936546] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"130.455561ms","start":"2024-07-17T19:18:28.522407Z","end":"2024-07-17T19:18:28.652862Z","steps":["trace[1352936546] 'read index received'  (duration: 22.572793ms)","trace[1352936546] 'applied index is now lower than readState.Index'  (duration: 107.882062ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:18:28.654458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.692996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-747597\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-17T19:18:28.70779Z","caller":"traceutil/trace.go:171","msg":"trace[1334954012] range","detail":"{range_begin:/registry/minions/addons-747597; range_end:; response_count:1; response_revision:444; }","duration":"250.679271ms","start":"2024-07-17T19:18:28.457089Z","end":"2024-07-17T19:18:28.707768Z","steps":["trace[1334954012] 'agreement among raft nodes before linearized reading'  (duration: 196.577674ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.665065Z","caller":"traceutil/trace.go:171","msg":"trace[64051045] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"149.922906ms","start":"2024-07-17T19:18:28.515106Z","end":"2024-07-17T19:18:28.665029Z","steps":["trace[64051045] 'process raft request'  (duration: 137.532585ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.66518Z","caller":"traceutil/trace.go:171","msg":"trace[1437021553] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"127.006434ms","start":"2024-07-17T19:18:28.538165Z","end":"2024-07-17T19:18:28.665171Z","steps":["trace[1437021553] 'process raft request'  (duration: 114.626107ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:18:28.665258Z","caller":"traceutil/trace.go:171","msg":"trace[1043602737] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"127.016396ms","start":"2024-07-17T19:18:28.538231Z","end":"2024-07-17T19:18:28.665247Z","steps":["trace[1043602737] 'process raft request'  (duration: 114.596849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.665355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.438116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-07-17T19:18:28.708447Z","caller":"traceutil/trace.go:171","msg":"trace[430166174] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:444; }","duration":"196.54185ms","start":"2024-07-17T19:18:28.511896Z","end":"2024-07-17T19:18:28.708438Z","steps":["trace[430166174] 'agreement among raft nodes before linearized reading'  (duration: 153.375929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.665394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.843717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:18:28.708695Z","caller":"traceutil/trace.go:171","msg":"trace[700972935] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:444; }","duration":"226.14265ms","start":"2024-07-17T19:18:28.482544Z","end":"2024-07-17T19:18:28.708687Z","steps":["trace[700972935] 'agreement among raft nodes before linearized reading'  (duration: 182.830383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:18:28.68554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.398973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:18:28.708913Z","caller":"traceutil/trace.go:171","msg":"trace[2028799458] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:452; }","duration":"170.780263ms","start":"2024-07-17T19:18:28.538122Z","end":"2024-07-17T19:18:28.708902Z","steps":["trace[2028799458] 'agreement among raft nodes before linearized reading'  (duration: 147.387304ms)"],"step_count":1}
	
	
	==> gcp-auth [78c53578da4401fc6cac8200a6235fe592d2ca4aa09fe9241ad3608e21567215] <==
	2024/07/17 19:20:19 GCP Auth Webhook started!
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:13 Ready to marshal response ...
	2024/07/17 19:21:13 Ready to write response ...
	2024/07/17 19:21:23 Ready to marshal response ...
	2024/07/17 19:21:23 Ready to write response ...
	2024/07/17 19:21:30 Ready to marshal response ...
	2024/07/17 19:21:30 Ready to write response ...
	2024/07/17 19:21:30 Ready to marshal response ...
	2024/07/17 19:21:30 Ready to write response ...
	2024/07/17 19:21:40 Ready to marshal response ...
	2024/07/17 19:21:40 Ready to write response ...
	2024/07/17 19:21:45 Ready to marshal response ...
	2024/07/17 19:21:45 Ready to write response ...
	2024/07/17 19:22:13 Ready to marshal response ...
	2024/07/17 19:22:13 Ready to write response ...
	2024/07/17 19:22:36 Ready to marshal response ...
	2024/07/17 19:22:36 Ready to write response ...
	2024/07/17 19:24:55 Ready to marshal response ...
	2024/07/17 19:24:55 Ready to write response ...
	
	
	==> kernel <==
	 19:27:48 up  3:10,  0 users,  load average: 0.11, 0.60, 1.58
	Linux addons-747597 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b1015172052bcf98ea27ec1ef3dd610546fd274ba553083439f5263b1f35f163] <==
	I0717 19:26:29.748600       1 main.go:303] handling current node
	I0717 19:26:39.748380       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:26:39.748424       1 main.go:303] handling current node
	I0717 19:26:49.749101       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:26:49.749210       1 main.go:303] handling current node
	W0717 19:26:52.156236       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:26:52.156272       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 19:26:56.892080       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:26:56.892188       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 19:26:59.749086       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:26:59.749119       1 main.go:303] handling current node
	W0717 19:27:02.856122       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 19:27:02.856163       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 19:27:09.749087       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:27:09.749122       1 main.go:303] handling current node
	I0717 19:27:19.749356       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:27:19.749387       1 main.go:303] handling current node
	I0717 19:27:29.749096       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:27:29.749128       1 main.go:303] handling current node
	I0717 19:27:39.748387       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 19:27:39.748420       1 main.go:303] handling current node
	W0717 19:27:40.201431       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:27:40.201497       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0717 19:27:47.525533       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:27:47.525665       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [e41f5b0b2a396697cf12986611f10b81a0910688d0451ede3400df88b82fd957] <==
	I0717 19:21:13.251865       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.59.34"}
	E0717 19:21:41.418310       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:41.429499       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:41.442140       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 19:21:56.441181       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 19:21:58.932897       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 19:22:01.266943       1 watch.go:250] http2: stream closed
	I0717 19:22:29.947016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:29.947071       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.002239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.002299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.022272       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.022403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.064335       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.066234       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.122675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 19:22:30.127501       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 19:22:30.734774       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 19:22:31.022639       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 19:22:31.194679       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 19:22:31.273735       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0717 19:22:31.763456       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 19:22:36.317670       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 19:22:36.651441       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.125.8"}
	I0717 19:24:55.917985       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.147.0"}
	
	
	==> kube-controller-manager [4b65ebb30b9afb972fa7199e503c632d5894024afc6ece91ff3284ab9ff27b00] <==
	W0717 19:25:49.536127       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:25:49.536166       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:02.913914       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:02.913951       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:06.680107       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:06.680146       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:13.612363       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:13.612401       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:21.996278       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:21.996318       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:38.894663       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:38.894700       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:26:52.480577       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:26:52.480614       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:27:12.154583       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:27:12.154622       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:27:17.308420       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:27:17.308456       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:27:19.966129       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:27:19.966168       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 19:27:45.230453       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:27:45.230509       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 19:27:47.227035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="6.072µs"
	W0717 19:27:48.946735       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 19:27:48.946794       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [61ff260c8679012b5d9ecad3421fc88e876ee006ec3796ea5da09f160623f4bc] <==
	I0717 19:18:30.095480       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:18:30.266721       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 19:18:30.440452       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 19:18:30.440510       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:18:30.483145       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 19:18:30.483174       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 19:18:30.483198       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:18:30.483480       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:18:30.483503       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:18:30.485152       1 config.go:319] "Starting node config controller"
	I0717 19:18:30.485271       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:18:30.485609       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:18:30.486250       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:18:30.486393       1 config.go:192] "Starting service config controller"
	I0717 19:18:30.486427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:18:30.588780       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:18:30.588870       1 shared_informer.go:320] Caches are synced for node config
	I0717 19:18:30.588901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [498353d1326cfa644558ec4e603149c6bf351cbdaec747a97838b17e2d1b4481] <==
	I0717 19:18:08.179441       1 serving.go:380] Generated self-signed cert in-memory
	I0717 19:18:09.370286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 19:18:09.370453       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:18:09.378422       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 19:18:09.378516       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 19:18:09.378525       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 19:18:09.378546       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:18:09.379874       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 19:18:09.379963       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 19:18:09.387435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:18:09.387479       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:18:09.479037       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0717 19:18:09.484710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 19:18:09.487936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:25:00 addons-747597 kubelet[1547]: I0717 19:25:00.126512    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135dfbc7-1f33-4ce3-80cc-36e5afe0c11f" path="/var/lib/kubelet/pods/135dfbc7-1f33-4ce3-80cc-36e5afe0c11f/volumes"
	Jul 17 19:25:00 addons-747597 kubelet[1547]: I0717 19:25:00.126977    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7153c16-e624-4db2-8244-1fb5a2a6991f" path="/var/lib/kubelet/pods/f7153c16-e624-4db2-8244-1fb5a2a6991f/volumes"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.611475    1547 scope.go:117] "RemoveContainer" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.631574    1547 scope.go:117] "RemoveContainer" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: E0717 19:25:01.631991    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": container with ID starting with f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b not found: ID does not exist" containerID="f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.632048    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b"} err="failed to get container status \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": rpc error: code = NotFound desc = could not find container \"f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b\": container with ID starting with f6d015a1ea3b43d01a4f65bf4a0c58ec494d2cb42f346e8d7f0d0e6b38c6e34b not found: ID does not exist"
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.645544    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncwpb\" (UniqueName: \"kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb\") pod \"b8db8990-e740-4420-98d1-f8f1a63f2954\" (UID: \"b8db8990-e740-4420-98d1-f8f1a63f2954\") "
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.645604    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert\") pod \"b8db8990-e740-4420-98d1-f8f1a63f2954\" (UID: \"b8db8990-e740-4420-98d1-f8f1a63f2954\") "
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.652282    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b8db8990-e740-4420-98d1-f8f1a63f2954" (UID: "b8db8990-e740-4420-98d1-f8f1a63f2954"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.652292    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb" (OuterVolumeSpecName: "kube-api-access-ncwpb") pod "b8db8990-e740-4420-98d1-f8f1a63f2954" (UID: "b8db8990-e740-4420-98d1-f8f1a63f2954"). InnerVolumeSpecName "kube-api-access-ncwpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.746694    1547 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ncwpb\" (UniqueName: \"kubernetes.io/projected/b8db8990-e740-4420-98d1-f8f1a63f2954-kube-api-access-ncwpb\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:25:01 addons-747597 kubelet[1547]: I0717 19:25:01.746744    1547 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8db8990-e740-4420-98d1-f8f1a63f2954-webhook-cert\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:25:02 addons-747597 kubelet[1547]: I0717 19:25:02.055536    1547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8db8990-e740-4420-98d1-f8f1a63f2954" path="/var/lib/kubelet/pods/b8db8990-e740-4420-98d1-f8f1a63f2954/volumes"
	Jul 17 19:25:10 addons-747597 kubelet[1547]: I0717 19:25:10.464286    1547 scope.go:117] "RemoveContainer" containerID="2c5f0c15cf3016ec51d39a52aea1710ef0c24c9d8bda91f6563ce95ee554a9fd"
	Jul 17 19:25:10 addons-747597 kubelet[1547]: I0717 19:25:10.485339    1547 scope.go:117] "RemoveContainer" containerID="799d68539952bb580ed18a368930a6d91c2292838b9abcf1e8db0145df490e03"
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.604818    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecfedd7e-e869-4dd1-b482-62f0706cc601-tmp-dir\") pod \"ecfedd7e-e869-4dd1-b482-62f0706cc601\" (UID: \"ecfedd7e-e869-4dd1-b482-62f0706cc601\") "
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.604871    1547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjdlp\" (UniqueName: \"kubernetes.io/projected/ecfedd7e-e869-4dd1-b482-62f0706cc601-kube-api-access-qjdlp\") pod \"ecfedd7e-e869-4dd1-b482-62f0706cc601\" (UID: \"ecfedd7e-e869-4dd1-b482-62f0706cc601\") "
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.605442    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecfedd7e-e869-4dd1-b482-62f0706cc601-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "ecfedd7e-e869-4dd1-b482-62f0706cc601" (UID: "ecfedd7e-e869-4dd1-b482-62f0706cc601"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.607827    1547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecfedd7e-e869-4dd1-b482-62f0706cc601-kube-api-access-qjdlp" (OuterVolumeSpecName: "kube-api-access-qjdlp") pod "ecfedd7e-e869-4dd1-b482-62f0706cc601" (UID: "ecfedd7e-e869-4dd1-b482-62f0706cc601"). InnerVolumeSpecName "kube-api-access-qjdlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.705540    1547 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecfedd7e-e869-4dd1-b482-62f0706cc601-tmp-dir\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.705582    1547 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qjdlp\" (UniqueName: \"kubernetes.io/projected/ecfedd7e-e869-4dd1-b482-62f0706cc601-kube-api-access-qjdlp\") on node \"addons-747597\" DevicePath \"\""
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.923289    1547 scope.go:117] "RemoveContainer" containerID="415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0"
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.939953    1547 scope.go:117] "RemoveContainer" containerID="415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0"
	Jul 17 19:27:48 addons-747597 kubelet[1547]: E0717 19:27:48.940568    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0\": container with ID starting with 415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0 not found: ID does not exist" containerID="415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0"
	Jul 17 19:27:48 addons-747597 kubelet[1547]: I0717 19:27:48.940620    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0"} err="failed to get container status \"415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0\": rpc error: code = NotFound desc = could not find container \"415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0\": container with ID starting with 415ce64e87ebf40a7ef15eae54682abe3dc30e0f90374e3ccbf81b37053069f0 not found: ID does not exist"
	
	
	==> storage-provisioner [ba3ec42298409c91cd1c4d66012b52dd46b53202134025a7637490e010f9c8f0] <==
	I0717 19:19:11.184047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:19:11.217395       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:19:11.217543       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:19:11.240416       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:19:11.240963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b50a7a67-0688-4d65-8776-9a699b69aae9", APIVersion:"v1", ResourceVersion:"956", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26 became leader
	I0717 19:19:11.241085       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26!
	I0717 19:19:11.341835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-747597_c3e3fa88-7124-4d89-bddd-2e8da3968e26!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-747597 -n addons-747597
helpers_test.go:261: (dbg) Run:  kubectl --context addons-747597 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (319.48s)


Test pass (301/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.30.2/json-events 7.01
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.2/DeleteAll 0.2
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.04
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.42
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.36
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 230.51
38 TestAddons/parallel/Registry 15.95
40 TestAddons/parallel/InspektorGadget 11.93
44 TestAddons/parallel/CSI 49.06
45 TestAddons/parallel/Headlamp 13.16
46 TestAddons/parallel/CloudSpanner 6.72
47 TestAddons/parallel/LocalPath 53.71
48 TestAddons/parallel/NvidiaDevicePlugin 6.54
49 TestAddons/parallel/Yakd 5.01
53 TestAddons/serial/GCPAuth/Namespaces 0.19
54 TestAddons/StoppedEnableDisable 12.21
55 TestCertOptions 39.07
56 TestCertExpiration 244.8
58 TestForceSystemdFlag 41.06
59 TestForceSystemdEnv 40.83
65 TestErrorSpam/setup 30.41
66 TestErrorSpam/start 0.7
67 TestErrorSpam/status 1.02
68 TestErrorSpam/pause 1.89
69 TestErrorSpam/unpause 1.78
70 TestErrorSpam/stop 1.44
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 58.6
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 29.77
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.1
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.23
82 TestFunctional/serial/CacheCmd/cache/add_local 1.07
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
87 TestFunctional/serial/CacheCmd/cache/delete 0.17
88 TestFunctional/serial/MinikubeKubectlCmd 0.17
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
90 TestFunctional/serial/ExtraConfig 35.59
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.68
93 TestFunctional/serial/LogsFileCmd 1.81
94 TestFunctional/serial/InvalidService 5.19
96 TestFunctional/parallel/ConfigCmd 0.52
97 TestFunctional/parallel/DashboardCmd 9.64
98 TestFunctional/parallel/DryRun 0.41
99 TestFunctional/parallel/InternationalLanguage 0.2
100 TestFunctional/parallel/StatusCmd 1.47
104 TestFunctional/parallel/ServiceCmdConnect 6.83
105 TestFunctional/parallel/AddonsCmd 0.14
106 TestFunctional/parallel/PersistentVolumeClaim 28.15
108 TestFunctional/parallel/SSHCmd 0.58
109 TestFunctional/parallel/CpCmd 2.02
111 TestFunctional/parallel/FileSync 0.37
112 TestFunctional/parallel/CertSync 2.01
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
120 TestFunctional/parallel/License 0.39
121 TestFunctional/parallel/Version/short 0.09
122 TestFunctional/parallel/Version/components 1.36
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
128 TestFunctional/parallel/ImageCommands/Setup 0.81
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
134 TestFunctional/parallel/ServiceCmd/DeployApp 13.26
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.37
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
145 TestFunctional/parallel/ServiceCmd/List 0.34
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
148 TestFunctional/parallel/ServiceCmd/Format 0.38
149 TestFunctional/parallel/ServiceCmd/URL 0.36
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
157 TestFunctional/parallel/ProfileCmd/profile_list 0.78
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
159 TestFunctional/parallel/MountCmd/any-port 6.8
160 TestFunctional/parallel/MountCmd/specific-port 2.16
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 188.48
169 TestMultiControlPlane/serial/DeployApp 7.05
170 TestMultiControlPlane/serial/PingHostFromPods 1.63
171 TestMultiControlPlane/serial/AddWorkerNode 36.48
172 TestMultiControlPlane/serial/NodeLabels 0.13
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
174 TestMultiControlPlane/serial/CopyFile 18.91
175 TestMultiControlPlane/serial/StopSecondaryNode 12.77
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
177 TestMultiControlPlane/serial/RestartSecondaryNode 32.65
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.33
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.13
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.91
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
182 TestMultiControlPlane/serial/StopCluster 35.82
183 TestMultiControlPlane/serial/RestartCluster 122.64
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
185 TestMultiControlPlane/serial/AddSecondaryNode 78.47
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
190 TestJSONOutput/start/Command 58.61
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.72
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.67
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.89
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.21
215 TestKicCustomNetwork/create_custom_network 38.96
216 TestKicCustomNetwork/use_default_bridge_network 36.3
217 TestKicExistingNetwork 35.61
218 TestKicCustomSubnet 34.73
219 TestKicStaticIP 38.13
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 68.6
224 TestMountStart/serial/StartWithMountFirst 7.65
225 TestMountStart/serial/VerifyMountFirst 0.27
226 TestMountStart/serial/StartWithMountSecond 9.15
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.62
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.21
231 TestMountStart/serial/RestartStopped 7.97
232 TestMountStart/serial/VerifyMountPostStop 0.27
235 TestMultiNode/serial/FreshStart2Nodes 89.28
236 TestMultiNode/serial/DeployApp2Nodes 5
237 TestMultiNode/serial/PingHostFrom2Pods 0.98
238 TestMultiNode/serial/AddNode 29.4
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.34
241 TestMultiNode/serial/CopyFile 10.15
242 TestMultiNode/serial/StopNode 2.27
243 TestMultiNode/serial/StartAfterStop 9.95
244 TestMultiNode/serial/RestartKeepsNodes 87.51
245 TestMultiNode/serial/DeleteNode 5.38
246 TestMultiNode/serial/StopMultiNode 23.86
247 TestMultiNode/serial/RestartMultiNode 56.36
248 TestMultiNode/serial/ValidateNameConflict 36.88
253 TestPreload 131.57
255 TestScheduledStopUnix 110.25
258 TestInsufficientStorage 11.11
259 TestRunningBinaryUpgrade 74.98
261 TestKubernetesUpgrade 394.98
262 TestMissingContainerUpgrade 144.97
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
265 TestNoKubernetes/serial/StartWithK8s 40.76
266 TestNoKubernetes/serial/StartWithStopK8s 20.37
267 TestNoKubernetes/serial/Start 9.87
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
269 TestNoKubernetes/serial/ProfileList 1.25
270 TestNoKubernetes/serial/Stop 1.31
271 TestNoKubernetes/serial/StartNoArgs 7.32
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
273 TestStoppedBinaryUpgrade/Setup 1.18
274 TestStoppedBinaryUpgrade/Upgrade 78.82
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
284 TestPause/serial/Start 64.35
285 TestPause/serial/SecondStartNoReconfiguration 25.37
286 TestPause/serial/Pause 0.76
287 TestPause/serial/VerifyStatus 0.36
288 TestPause/serial/Unpause 0.91
289 TestPause/serial/PauseAgain 0.89
290 TestPause/serial/DeletePaused 2.84
291 TestPause/serial/VerifyDeletedResources 0.34
299 TestNetworkPlugins/group/false 5.04
304 TestStartStop/group/old-k8s-version/serial/FirstStart 179.68
305 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.45
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.44
309 TestStartStop/group/old-k8s-version/serial/Stop 12.17
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
311 TestStartStop/group/old-k8s-version/serial/SecondStart 152.22
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.55
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.98
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
320 TestStartStop/group/old-k8s-version/serial/Pause 3.08
322 TestStartStop/group/embed-certs/serial/FirstStart 60.71
323 TestStartStop/group/embed-certs/serial/DeployApp 9.36
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
325 TestStartStop/group/embed-certs/serial/Stop 11.96
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
327 TestStartStop/group/embed-certs/serial/SecondStart 279.25
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3
333 TestStartStop/group/no-preload/serial/FirstStart 65.43
334 TestStartStop/group/no-preload/serial/DeployApp 9.38
335 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
336 TestStartStop/group/no-preload/serial/Stop 11.95
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/no-preload/serial/SecondStart 302.62
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
342 TestStartStop/group/embed-certs/serial/Pause 3.03
344 TestStartStop/group/newest-cni/serial/FirstStart 40.58
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
347 TestStartStop/group/newest-cni/serial/Stop 1.37
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
349 TestStartStop/group/newest-cni/serial/SecondStart 15.86
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/newest-cni/serial/Pause 3.21
354 TestNetworkPlugins/group/auto/Start 59.55
355 TestNetworkPlugins/group/auto/KubeletFlags 0.31
356 TestNetworkPlugins/group/auto/NetCatPod 10.28
357 TestNetworkPlugins/group/auto/DNS 0.26
358 TestNetworkPlugins/group/auto/Localhost 0.17
359 TestNetworkPlugins/group/auto/HairPin 0.16
360 TestNetworkPlugins/group/kindnet/Start 59.83
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
363 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
364 TestStartStop/group/no-preload/serial/Pause 3.08
365 TestNetworkPlugins/group/calico/Start 77.65
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
368 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
369 TestNetworkPlugins/group/kindnet/DNS 0.23
370 TestNetworkPlugins/group/kindnet/Localhost 0.22
371 TestNetworkPlugins/group/kindnet/HairPin 0.2
372 TestNetworkPlugins/group/custom-flannel/Start 73.01
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.32
375 TestNetworkPlugins/group/calico/NetCatPod 13.31
376 TestNetworkPlugins/group/calico/DNS 0.21
377 TestNetworkPlugins/group/calico/Localhost 0.21
378 TestNetworkPlugins/group/calico/HairPin 0.23
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
381 TestNetworkPlugins/group/enable-default-cni/Start 85.52
382 TestNetworkPlugins/group/custom-flannel/DNS 0.28
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
385 TestNetworkPlugins/group/flannel/Start 68.73
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
393 TestNetworkPlugins/group/flannel/NetCatPod 11.35
394 TestNetworkPlugins/group/bridge/Start 59.49
395 TestNetworkPlugins/group/flannel/DNS 0.22
396 TestNetworkPlugins/group/flannel/Localhost 0.2
397 TestNetworkPlugins/group/flannel/HairPin 0.22
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
399 TestNetworkPlugins/group/bridge/NetCatPod 10.25
400 TestNetworkPlugins/group/bridge/DNS 0.18
401 TestNetworkPlugins/group/bridge/Localhost 0.14
402 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (8.31s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-186638 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-186638 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.31115698s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.31s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-186638
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-186638: exit status 85 (75.750299ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-186638 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |          |
	|         | -p download-only-186638        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:16:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:16:55.943920  595153 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:16:55.944116  595153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:16:55.944128  595153 out.go:304] Setting ErrFile to fd 2...
	I0717 19:16:55.944133  595153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:16:55.944398  595153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	W0717 19:16:55.944552  595153 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19283-589755/.minikube/config/config.json: open /home/jenkins/minikube-integration/19283-589755/.minikube/config/config.json: no such file or directory
	I0717 19:16:55.944994  595153 out.go:298] Setting JSON to true
	I0717 19:16:55.945860  595153 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10759,"bootTime":1721233057,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:16:55.945929  595153 start.go:139] virtualization:  
	I0717 19:16:55.949168  595153 out.go:97] [download-only-186638] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0717 19:16:55.949400  595153 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 19:16:55.949457  595153 notify.go:220] Checking for updates...
	I0717 19:16:55.951649  595153 out.go:169] MINIKUBE_LOCATION=19283
	I0717 19:16:55.955267  595153 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:16:55.957885  595153 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:16:55.960493  595153 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:16:55.962978  595153 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 19:16:55.968260  595153 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 19:16:55.968516  595153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:16:55.989287  595153 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:16:55.989380  595153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:16:56.059562  595153 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-07-17 19:16:56.048860951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:16:56.059677  595153 docker.go:307] overlay module found
	I0717 19:16:56.062213  595153 out.go:97] Using the docker driver based on user configuration
	I0717 19:16:56.062243  595153 start.go:297] selected driver: docker
	I0717 19:16:56.062250  595153 start.go:901] validating driver "docker" against <nil>
	I0717 19:16:56.062375  595153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:16:56.113106  595153 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-07-17 19:16:56.104443315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:16:56.113287  595153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:16:56.113579  595153 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 19:16:56.113734  595153 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 19:16:56.117133  595153 out.go:169] Using Docker driver with root privileges
	I0717 19:16:56.120647  595153 cni.go:84] Creating CNI manager for ""
	I0717 19:16:56.120668  595153 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:16:56.120679  595153 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:16:56.120764  595153 start.go:340] cluster config:
	{Name:download-only-186638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-186638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:16:56.123534  595153 out.go:97] Starting "download-only-186638" primary control-plane node in "download-only-186638" cluster
	I0717 19:16:56.123555  595153 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 19:16:56.125619  595153 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 19:16:56.125646  595153 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:16:56.125743  595153 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 19:16:56.141670  595153 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:16:56.142457  595153 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 19:16:56.142571  595153 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:16:56.189378  595153 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0717 19:16:56.189412  595153 cache.go:56] Caching tarball of preloaded images
	I0717 19:16:56.190152  595153 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:16:56.193439  595153 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 19:16:56.193458  595153 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 19:16:56.305558  595153 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-186638 host does not exist
	  To start a cluster, run: "minikube start -p download-only-186638"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-186638
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)
TestDownloadOnly/v1.30.2/json-events (7.01s)
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-639410 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-639410 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.008825059s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (7.01s)
TestDownloadOnly/v1.30.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)
TestDownloadOnly/v1.30.2/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-639410
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-639410: exit status 85 (73.855416ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-186638 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | -p download-only-186638        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-186638        | download-only-186638 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | -o=json --download-only        | download-only-639410 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | -p download-only-639410        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:17:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:04.699493  595360 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:17:04.699824  595360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.699840  595360 out.go:304] Setting ErrFile to fd 2...
	I0717 19:17:04.699847  595360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.700149  595360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:17:04.700646  595360 out.go:298] Setting JSON to true
	I0717 19:17:04.701649  595360 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10768,"bootTime":1721233057,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:17:04.701735  595360 start.go:139] virtualization:  
	I0717 19:17:04.704224  595360 out.go:97] [download-only-639410] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 19:17:04.704573  595360 notify.go:220] Checking for updates...
	I0717 19:17:04.706338  595360 out.go:169] MINIKUBE_LOCATION=19283
	I0717 19:17:04.708841  595360 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:04.711029  595360 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:17:04.713212  595360 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:17:04.715152  595360 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 19:17:04.718874  595360 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 19:17:04.719307  595360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:17:04.742331  595360 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:17:04.742506  595360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.809908  595360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 19:17:04.797298626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:04.810033  595360 docker.go:307] overlay module found
	I0717 19:17:04.812287  595360 out.go:97] Using the docker driver based on user configuration
	I0717 19:17:04.812330  595360 start.go:297] selected driver: docker
	I0717 19:17:04.812338  595360 start.go:901] validating driver "docker" against <nil>
	I0717 19:17:04.812471  595360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.877369  595360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 19:17:04.867667945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:04.877557  595360 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:17:04.877899  595360 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 19:17:04.878088  595360 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 19:17:04.880469  595360 out.go:169] Using Docker driver with root privileges
	I0717 19:17:04.882571  595360 cni.go:84] Creating CNI manager for ""
	I0717 19:17:04.882606  595360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:04.882619  595360 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:17:04.882728  595360 start.go:340] cluster config:
	{Name:download-only-639410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-639410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:04.884915  595360 out.go:97] Starting "download-only-639410" primary control-plane node in "download-only-639410" cluster
	I0717 19:17:04.884958  595360 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 19:17:04.887093  595360 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 19:17:04.887136  595360 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:04.887267  595360 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 19:17:04.904467  595360 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:17:04.904610  595360 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 19:17:04.904638  595360 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 19:17:04.904647  595360 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 19:17:04.904656  595360 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 19:17:04.964573  595360 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0717 19:17:04.964610  595360 cache.go:56] Caching tarball of preloaded images
	I0717 19:17:04.964792  595360 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:17:04.966980  595360 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 19:17:04.967016  595360 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 ...
	I0717 19:17:05.083431  595360 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:e4bf0ba8584d1a2d67dbb103edb83dd1 -> /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-639410 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639410"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

TestDownloadOnly/v1.30.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.20s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-639410
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.31.0-beta.0/json-events (7.04s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-902211 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-902211 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.035548989s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.04s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.42s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-902211
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-902211: exit status 85 (423.876385ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-186638 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | -p download-only-186638             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-186638             | download-only-186638 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | -o=json --download-only             | download-only-639410 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | -p download-only-639410             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| delete  | -p download-only-639410             | download-only-639410 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | -o=json --download-only             | download-only-902211 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | -p download-only-902211             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:17:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:12.132601  595563 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:17:12.132781  595563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:12.132794  595563 out.go:304] Setting ErrFile to fd 2...
	I0717 19:17:12.132799  595563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:12.133043  595563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:17:12.133456  595563 out.go:298] Setting JSON to true
	I0717 19:17:12.134351  595563 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10776,"bootTime":1721233057,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:17:12.134424  595563 start.go:139] virtualization:  
	I0717 19:17:12.137184  595563 out.go:97] [download-only-902211] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 19:17:12.137360  595563 notify.go:220] Checking for updates...
	I0717 19:17:12.139350  595563 out.go:169] MINIKUBE_LOCATION=19283
	I0717 19:17:12.141487  595563 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:12.143205  595563 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:17:12.145042  595563 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:17:12.146478  595563 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 19:17:12.149875  595563 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 19:17:12.150181  595563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:17:12.177616  595563 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:17:12.177735  595563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:12.252339  595563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:12.241850823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:12.252457  595563 docker.go:307] overlay module found
	I0717 19:17:12.254630  595563 out.go:97] Using the docker driver based on user configuration
	I0717 19:17:12.254658  595563 start.go:297] selected driver: docker
	I0717 19:17:12.254664  595563 start.go:901] validating driver "docker" against <nil>
	I0717 19:17:12.254844  595563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:12.308227  595563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-17 19:17:12.299318048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:17:12.308386  595563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:17:12.308710  595563 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 19:17:12.308873  595563 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 19:17:12.311027  595563 out.go:169] Using Docker driver with root privileges
	I0717 19:17:12.312856  595563 cni.go:84] Creating CNI manager for ""
	I0717 19:17:12.312879  595563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:12.312891  595563 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:17:12.312978  595563 start.go:340] cluster config:
	{Name:download-only-902211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-902211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:17:12.315007  595563 out.go:97] Starting "download-only-902211" primary control-plane node in "download-only-902211" cluster
	I0717 19:17:12.315027  595563 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 19:17:12.316648  595563 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 19:17:12.316675  595563 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:17:12.316786  595563 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 19:17:12.332013  595563 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 19:17:12.332153  595563 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 19:17:12.332176  595563 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 19:17:12.332185  595563 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 19:17:12.332193  595563 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 19:17:12.380080  595563 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0717 19:17:12.380107  595563 cache.go:56] Caching tarball of preloaded images
	I0717 19:17:12.380270  595563 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:17:12.382648  595563 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 19:17:12.382678  595563 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 19:17:12.499551  595563 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19283-589755/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-902211 host does not exist
	  To start a cluster, run: "minikube start -p download-only-902211"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.42s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.36s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-902211
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-794463 --alsologtostderr --binary-mirror http://127.0.0.1:35105 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-794463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-794463
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-747597
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-747597: exit status 85 (65.723398ms)

-- stdout --
	* Profile "addons-747597" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-747597"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-747597
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-747597: exit status 85 (72.636519ms)

-- stdout --
	* Profile "addons-747597" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-747597"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (230.51s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-747597 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-747597 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m50.50844761s)
--- PASS: TestAddons/Setup (230.51s)

TestAddons/parallel/Registry (15.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 54.402007ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-4kkkf" [9820910e-bb3a-48fe-b2d1-5c69c2b66429] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005956874s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qczlm" [dc1faa8a-6f1b-41a9-b047-b18156274ad5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006120212s
addons_test.go:342: (dbg) Run:  kubectl --context addons-747597 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-747597 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-747597 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.687368065s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 ip
2024/07/17 19:21:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.95s)

TestAddons/parallel/InspektorGadget (11.93s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l9fz8" [c7d82bc2-dbae-4517-9d25-aebfb1795e42] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.016707661s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-747597
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-747597: (5.907901962s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

TestAddons/parallel/CSI (49.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.928182ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-747597 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-747597 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [083969eb-3595-4d80-9ca5-aa6ab12d4d74] Pending
helpers_test.go:344: "task-pv-pod" [083969eb-3595-4d80-9ca5-aa6ab12d4d74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [083969eb-3595-4d80-9ca5-aa6ab12d4d74] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003824773s
addons_test.go:586: (dbg) Run:  kubectl --context addons-747597 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-747597 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-747597 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-747597 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-747597 delete pod task-pv-pod: (1.171857312s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-747597 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-747597 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-747597 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b7a27919-36fd-4616-ba36-a83afc8fa10e] Pending
helpers_test.go:344: "task-pv-pod-restore" [b7a27919-36fd-4616-ba36-a83afc8fa10e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b7a27919-36fd-4616-ba36-a83afc8fa10e] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003575365s
addons_test.go:628: (dbg) Run:  kubectl --context addons-747597 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-747597 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-747597 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.746890947s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 addons disable volumesnapshots --alsologtostderr -v=1: (1.009157125s)
--- PASS: TestAddons/parallel/CSI (49.06s)

TestAddons/parallel/Headlamp (13.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-747597 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-747597 --alsologtostderr -v=1: (1.151759656s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-g6rr2" [65d2c949-30a3-41b4-88ca-efc67df9fc52] Pending
helpers_test.go:344: "headlamp-7867546754-g6rr2" [65d2c949-30a3-41b4-88ca-efc67df9fc52] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-g6rr2" [65d2c949-30a3-41b4-88ca-efc67df9fc52] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003548462s
--- PASS: TestAddons/parallel/Headlamp (13.16s)

TestAddons/parallel/CloudSpanner (6.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-zhb45" [54c72fe6-ed21-47b5-a096-6a3b07054815] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00608379s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-747597
--- PASS: TestAddons/parallel/CloudSpanner (6.72s)

TestAddons/parallel/LocalPath (53.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-747597 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-747597 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c6f4bdb6-fdd1-4cdb-8c8c-92aa9274e5d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c6f4bdb6-fdd1-4cdb-8c8c-92aa9274e5d3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c6f4bdb6-fdd1-4cdb-8c8c-92aa9274e5d3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004157953s
addons_test.go:992: (dbg) Run:  kubectl --context addons-747597 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 ssh "cat /opt/local-path-provisioner/pvc-e6f3f4fe-8b6e-4e46-a13c-533c45ae5ad4_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-747597 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-747597 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-747597 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-747597 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.594430614s)
--- PASS: TestAddons/parallel/LocalPath (53.71s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8tq66" [e1a33d1c-572f-4efa-b24a-abffc419c427] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008439649s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-747597
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-ftstw" [e8920e76-d828-49b1-b0f4-bc2a7dc2866a] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004377216s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-747597 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-747597 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-747597
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-747597: (11.936967318s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-747597
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-747597
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-747597
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (39.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-318746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-318746 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.071318241s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-318746 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-318746 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-318746 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-318746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-318746
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-318746: (2.172661782s)
--- PASS: TestCertOptions (39.07s)

TestCertExpiration (244.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-422733 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-422733 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.822201222s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-422733 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-422733 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.520799258s)
helpers_test.go:175: Cleaning up "cert-expiration-422733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-422733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-422733: (2.447086465s)
--- PASS: TestCertExpiration (244.80s)

TestForceSystemdFlag (41.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-209688 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-209688 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.879317903s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-209688 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-209688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-209688
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-209688: (2.840577173s)
--- PASS: TestForceSystemdFlag (41.06s)

TestForceSystemdEnv (40.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-886066 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 20:09:37.595531  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-886066 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.011834597s)
helpers_test.go:175: Cleaning up "force-systemd-env-886066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-886066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-886066: (2.820322999s)
--- PASS: TestForceSystemdEnv (40.83s)

TestErrorSpam/setup (30.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-492382 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-492382 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-492382 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-492382 --driver=docker  --container-runtime=crio: (30.405976469s)
--- PASS: TestErrorSpam/setup (30.41s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 stop: (1.245543412s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-492382 --log_dir /tmp/nospam-492382 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19283-589755/.minikube/files/etc/test/nested/copy/595147/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-815404 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (58.596731431s)
--- PASS: TestFunctional/serial/StartWithProxy (58.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.77s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-815404 --alsologtostderr -v=8: (29.760828142s)
functional_test.go:659: soft start took 29.765331243s for "functional-815404" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.77s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-815404 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:3.1: (1.44720505s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:3.3: (1.454511809s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 cache add registry.k8s.io/pause:latest: (1.326390137s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.23s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-815404 /tmp/TestFunctionalserialCacheCmdcacheadd_local3397253031/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache add minikube-local-cache-test:functional-815404
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache delete minikube-local-cache-test:functional-815404
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-815404
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.199022ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 kubectl -- --context functional-815404 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-815404 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (35.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 19:31:12.195562  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.202779  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.213154  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.233513  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.273861  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.354191  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.514653  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:12.835153  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:13.476069  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:14.756402  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:31:17.317211  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-815404 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.594151445s)
functional_test.go:757: restart took 35.594280364s for "functional-815404" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.59s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-815404 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 logs
E0717 19:31:22.437367  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 logs: (1.679579032s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.81s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 logs --file /tmp/TestFunctionalserialLogsFileCmd3521291228/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 logs --file /tmp/TestFunctionalserialLogsFileCmd3521291228/001/logs.txt: (1.806868413s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (5.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-815404 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-815404
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-815404: exit status 115 (791.678648ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30797 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-815404 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-815404 delete -f testdata/invalidsvc.yaml: (1.137396113s)
--- PASS: TestFunctional/serial/InvalidService (5.19s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 config get cpus: exit status 14 (72.61059ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 config get cpus: exit status 14 (81.185851ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (9.64s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-815404 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-815404 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 624328: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.64s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-815404 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.784412ms)

-- stdout --
	* [functional-815404] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0717 19:32:12.275695  624050 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:32:12.275903  624050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:32:12.275953  624050 out.go:304] Setting ErrFile to fd 2...
	I0717 19:32:12.275973  624050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:32:12.276268  624050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:32:12.276735  624050 out.go:298] Setting JSON to false
	I0717 19:32:12.277793  624050 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11676,"bootTime":1721233057,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:32:12.277890  624050 start.go:139] virtualization:  
	I0717 19:32:12.281086  624050 out.go:177] * [functional-815404] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 19:32:12.283678  624050 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:32:12.283771  624050 notify.go:220] Checking for updates...
	I0717 19:32:12.287798  624050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:32:12.290060  624050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:32:12.292465  624050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:32:12.295046  624050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 19:32:12.297135  624050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:32:12.299784  624050 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:32:12.300347  624050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:32:12.322308  624050 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:32:12.322424  624050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:32:12.381941  624050 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-17 19:32:12.371197165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:32:12.382051  624050 docker.go:307] overlay module found
	I0717 19:32:12.384512  624050 out.go:177] * Using the docker driver based on existing profile
	I0717 19:32:12.386788  624050 start.go:297] selected driver: docker
	I0717 19:32:12.386801  624050 start.go:901] validating driver "docker" against &{Name:functional-815404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-815404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:32:12.386932  624050 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:32:12.389645  624050 out.go:177] 
	W0717 19:32:12.391771  624050 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 19:32:12.393614  624050 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-815404 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-815404 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.799ms)

-- stdout --
	* [functional-815404] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:32:12.677525  624160 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:32:12.677658  624160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:32:12.677669  624160 out.go:304] Setting ErrFile to fd 2...
	I0717 19:32:12.677673  624160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:32:12.678012  624160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:32:12.678389  624160 out.go:298] Setting JSON to false
	I0717 19:32:12.679298  624160 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11676,"bootTime":1721233057,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 19:32:12.679398  624160 start.go:139] virtualization:  
	I0717 19:32:12.684493  624160 out.go:177] * [functional-815404] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0717 19:32:12.687589  624160 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:32:12.687648  624160 notify.go:220] Checking for updates...
	I0717 19:32:12.695131  624160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:32:12.699500  624160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 19:32:12.702426  624160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 19:32:12.706516  624160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 19:32:12.709257  624160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:32:12.712311  624160 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:32:12.712857  624160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:32:12.749919  624160 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 19:32:12.750053  624160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:32:12.814372  624160 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-17 19:32:12.803463024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:32:12.814490  624160 docker.go:307] overlay module found
	I0717 19:32:12.817025  624160 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 19:32:12.819159  624160 start.go:297] selected driver: docker
	I0717 19:32:12.819191  624160 start.go:901] validating driver "docker" against &{Name:functional-815404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-815404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:32:12.819333  624160 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:32:12.822653  624160 out.go:177] 
	W0717 19:32:12.824893  624160 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 19:32:12.826847  624160 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.47s)
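The `-f` flag above is a Go text/template rendered against minikube's status struct, which is why field references like `{{.Kubelet}}` appear verbatim in the command (and why the test's "kublet:" label is reproduced untouched in the output). A self-contained sketch, with `Status` a stand-in for the real type:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// statusFormat is the exact -f value passed in the test above, including its
// "kublet" spelling; the flag's value is fed straight to text/template.
const statusFormat = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"

// Status models only the fields the format string references; it is a
// stand-in, not minikube's actual status struct.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus executes the user-supplied format against a status value.
func renderStatus(format string, s Status) string {
	var b strings.Builder
	template.Must(template.New("status").Parse(format)).Execute(&b, s)
	return b.String()
}

func main() {
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	fmt.Println(renderStatus(statusFormat, s))
	// host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```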

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-815404 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-815404 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-fzffn" [aa0d7cc1-342f-42e5-af1c-a218db870a59] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-fzffn" [aa0d7cc1-342f-42e5-af1c-a218db870a59] Running
E0717 19:31:53.158791  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.005062741s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31859
functional_test.go:1671: http://192.168.49.2:31859: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-fzffn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31859
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.83s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2e79f0c0-7659-4100-965b-8e1cb2f28414] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004550617s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-815404 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-815404 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-815404 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-815404 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4d052cd8-e803-4058-b255-a7f9818fb864] Pending
helpers_test.go:344: "sp-pod" [4d052cd8-e803-4058-b255-a7f9818fb864] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4d052cd8-e803-4058-b255-a7f9818fb864] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003456435s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-815404 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-815404 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-815404 delete -f testdata/storage-provisioner/pod.yaml: (1.046409593s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-815404 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bc876984-6437-48f9-8d9a-88c25e75d536] Pending
helpers_test.go:344: "sp-pod" [bc876984-6437-48f9-8d9a-88c25e75d536] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bc876984-6437-48f9-8d9a-88c25e75d536] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004563268s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-815404 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh -n functional-815404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cp functional-815404:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1731979941/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh -n functional-815404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh -n functional-815404 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/595147/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /etc/test/nested/copy/595147/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/595147.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /etc/ssl/certs/595147.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/595147.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /usr/share/ca-certificates/595147.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0717 19:31:32.678239  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5951472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /etc/ssl/certs/5951472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5951472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /usr/share/ca-certificates/5951472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.01s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-815404 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "sudo systemctl is-active docker": exit status 1 (353.786701ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "sudo systemctl is-active containerd": exit status 1 (374.095107ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
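The "Non-zero exit ... exit status 1" lines above are expected: `systemctl is-active` prints `inactive` and exits with status 3 for a stopped unit, ssh surfaces that as "Process exited with status 3", and the test treats the non-zero exit as confirmation that docker and containerd are disabled under the crio runtime. A small sketch of capturing an exit code the same way, using `sh -c "exit 3"` as a stand-in for an inactive unit:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitStatus runs a command and returns its exit code, the way the ssh step
// above surfaces systemd's status-3 "inactive" result.
func exitStatus(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1 // command failed to start at all
}

func main() {
	// Stand-in for `systemctl is-active docker` on a host where the unit
	// is inactive: both exit with status 3.
	fmt.Println(exitStatus("sh", "-c", "exit 3")) // 3
}
```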

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 version -o=json --components: (1.363715246s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-815404 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-815404
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-815404 image ls --format short --alsologtostderr:
I0717 19:32:19.600915  624675 out.go:291] Setting OutFile to fd 1 ...
I0717 19:32:19.601197  624675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:19.601214  624675 out.go:304] Setting ErrFile to fd 2...
I0717 19:32:19.601221  624675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:19.601479  624675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
I0717 19:32:19.602102  624675 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:19.602213  624675 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:19.602735  624675 cli_runner.go:164] Run: docker container inspect functional-815404 --format={{.State.Status}}
I0717 19:32:19.633606  624675 ssh_runner.go:195] Run: systemctl --version
I0717 19:32:19.633698  624675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-815404
I0717 19:32:19.653183  624675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/functional-815404/id_rsa Username:docker}
I0717 19:32:19.764897  624675 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
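As the stderr shows, `image ls` shells into the node and reads `sudo crictl images --output json`, then flattens the repo tags for display. A sketch of that parsing step; the struct models only the fields a listing needs, and the sample payload is illustrative, not captured from this run:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages approximates the shape of `crictl images --output json`;
// field names here are assumptions about that schema, kept minimal.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

// repoTags flattens every image's repo tags into one list, the way the
// short-format listing above prints one tag per line.
func repoTags(raw []byte) ([]string, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range out.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	sample := []byte(`{"images":[
	  {"id":"829e9de338bd5","repoTags":["registry.k8s.io/pause:3.9"],"size":"520000"},
	  {"id":"ba04bb24b9575","repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]}`)
	tags, err := repoTags(sample)
	if err != nil {
		panic(err)
	}
	for _, t := range tags {
		fmt.Println(t)
	}
}
```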

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-815404 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | 89d73d416b992 | 62MB   |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kicbase/echo-server           | functional-815404  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e1dcc3400d3ea | 108MB  |
| docker.io/library/nginx                 | latest             | 443d199e8bfcc | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/my-image                      | functional-815404  | 8d9694382a5c7 | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 66dbb96a9149f | 89.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 84c601f3f72c8 | 114MB  |
| registry.k8s.io/kube-scheduler          | v1.30.2            | c7dd04b1bafeb | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-815404 image ls --format table --alsologtostderr:
I0717 19:32:22.815127  625031 out.go:291] Setting OutFile to fd 1 ...
I0717 19:32:22.815285  625031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:22.815298  625031 out.go:304] Setting ErrFile to fd 2...
I0717 19:32:22.815303  625031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:22.815599  625031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
I0717 19:32:22.816376  625031 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:22.816513  625031 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:22.819498  625031 cli_runner.go:164] Run: docker container inspect functional-815404 --format={{.State.Status}}
I0717 19:32:22.845407  625031 ssh_runner.go:195] Run: systemctl --version
I0717 19:32:22.845467  625031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-815404
I0717 19:32:22.864417  625031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/functional-815404/id_rsa Username:docker}
I0717 19:32:22.964017  625031 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-815404 image ls --format json --alsologtostderr:
[{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"108229958"},{"id":"c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14
cdd5bf9ed4637d8b9f0c74a27"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"61568326"},{"id":"84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d","registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a6321f3e41fc105845b7"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"113538528"},{"id":"66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae","repoDigests":["registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"89199511"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repo
Tags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-815404"],"size":"4788229"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2437c
f762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec
695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"62007858"},{"id":"bfc1c3261619e9a1e2a8c59bce5055a8ca0051e0d4b3c3a5445ec371acd7a79e","repoDigests":["docker.io/library/e01a476f5e949c573c9a593b53b144c8c75f35f3a543bcf3a020c4fd72453717-tmp@sha256:d79b6232284151bf22a5f1bb84064a801265cc263380ff5c95df60579ed5a0ad"],"repoTags":[],"size":"1637643"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDi
gests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},{"id":"1611cd07b61d57
dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e0
0b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-815404 image ls --format json --alsologtostderr:
I0717 19:32:22.522509  624966 out.go:291] Setting OutFile to fd 1 ...
I0717 19:32:22.522679  624966 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:22.522708  624966 out.go:304] Setting ErrFile to fd 2...
I0717 19:32:22.522728  624966 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:22.523157  624966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
I0717 19:32:22.524969  624966 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:22.525179  624966 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:22.525743  624966 cli_runner.go:164] Run: docker container inspect functional-815404 --format={{.State.Status}}
I0717 19:32:22.547212  624966 ssh_runner.go:195] Run: systemctl --version
I0717 19:32:22.548170  624966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-815404
I0717 19:32:22.567888  624966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/functional-815404/id_rsa Username:docker}
I0717 19:32:22.672428  624966 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
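The `image ls --format json` stdout above is a flat JSON array of image records (`id`, `repoDigests`, `repoTags`, `size`). As a rough illustration (not part of the test suite), the records can be consumed with Python's standard `json` module; the excerpt below is a hypothetical three-entry sample copied from the listing above. Note that `size` is a string of bytes and untagged images carry an empty `repoTags` list:

```python
import json

# Hypothetical excerpt of the `image ls --format json` output shown above.
image_ls_output = """
[
  {"id": "014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd",
   "repoTags": ["registry.k8s.io/etcd:3.5.12-0"], "size": "140414767"},
  {"id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
   "repoTags": ["registry.k8s.io/pause:3.9"], "size": "520014"},
  {"id": "20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8",
   "repoTags": [], "size": "247562353"}
]
"""

images = json.loads(image_ls_output)

# Sizes are decimal-byte strings, so convert before summing.
total_bytes = sum(int(img["size"]) for img in images)
# Untagged entries (repoTags == []) simply contribute no tags here.
tagged = [tag for img in images for tag in img["repoTags"]]

print(f"{len(images)} images, {total_bytes / 1e6:.1f} MB total")
print("tagged:", tagged)
```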

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-815404 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14cdd5bf9ed4637d8b9f0c74a27
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "61568326"
- id: e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "108229958"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "89199511"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-815404
size: "4788229"
- id: 89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "62007858"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
- registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a6321f3e41fc105845b7
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "113538528"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-815404 image ls --format yaml --alsologtostderr:
I0717 19:32:19.862621  624708 out.go:291] Setting OutFile to fd 1 ...
I0717 19:32:19.862761  624708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:19.862773  624708 out.go:304] Setting ErrFile to fd 2...
I0717 19:32:19.862776  624708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:19.863023  624708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
I0717 19:32:19.863670  624708 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:19.863794  624708 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:19.864254  624708 cli_runner.go:164] Run: docker container inspect functional-815404 --format={{.State.Status}}
I0717 19:32:19.884340  624708 ssh_runner.go:195] Run: systemctl --version
I0717 19:32:19.884402  624708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-815404
I0717 19:32:19.905112  624708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/functional-815404/id_rsa Username:docker}
I0717 19:32:20.000140  624708 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh pgrep buildkitd: exit status 1 (371.092069ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image build -t localhost/my-image:functional-815404 testdata/build --alsologtostderr
2024/07/17 19:32:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 image build -t localhost/my-image:functional-815404 testdata/build --alsologtostderr: (2.629949169s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-815404 image build -t localhost/my-image:functional-815404 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bfc1c326161
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-815404
--> 8d9694382a5
Successfully tagged localhost/my-image:functional-815404
8d9694382a5c77a02ddf048125f370b2405c1e8b288b1510362afee4e82ce1fe
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-815404 image build -t localhost/my-image:functional-815404 testdata/build --alsologtostderr:
I0717 19:32:20.516714  624800 out.go:291] Setting OutFile to fd 1 ...
I0717 19:32:20.517590  624800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:20.517613  624800 out.go:304] Setting ErrFile to fd 2...
I0717 19:32:20.517621  624800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 19:32:20.518358  624800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
I0717 19:32:20.519154  624800 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:20.519962  624800 config.go:182] Loaded profile config "functional-815404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 19:32:20.520617  624800 cli_runner.go:164] Run: docker container inspect functional-815404 --format={{.State.Status}}
I0717 19:32:20.541138  624800 ssh_runner.go:195] Run: systemctl --version
I0717 19:32:20.541192  624800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-815404
I0717 19:32:20.570885  624800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/functional-815404/id_rsa Username:docker}
I0717 19:32:20.681061  624800 build_images.go:161] Building image from path: /tmp/build.3566817826.tar
I0717 19:32:20.681126  624800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 19:32:20.705433  624800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3566817826.tar
I0717 19:32:20.711186  624800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3566817826.tar: stat -c "%s %y" /var/lib/minikube/build/build.3566817826.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3566817826.tar': No such file or directory
I0717 19:32:20.711219  624800 ssh_runner.go:362] scp /tmp/build.3566817826.tar --> /var/lib/minikube/build/build.3566817826.tar (3072 bytes)
I0717 19:32:20.744387  624800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3566817826
I0717 19:32:20.754764  624800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3566817826 -xf /var/lib/minikube/build/build.3566817826.tar
I0717 19:32:20.765836  624800 crio.go:315] Building image: /var/lib/minikube/build/build.3566817826
I0717 19:32:20.765918  624800 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-815404 /var/lib/minikube/build/build.3566817826 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0717 19:32:23.041107  624800 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-815404 /var/lib/minikube/build/build.3566817826 --cgroup-manager=cgroupfs: (2.275160281s)
I0717 19:32:23.041171  624800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3566817826
I0717 19:32:23.050405  624800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3566817826.tar
I0717 19:32:23.061134  624800 build_images.go:217] Built localhost/my-image:functional-815404 from /tmp/build.3566817826.tar
I0717 19:32:23.061168  624800 build_images.go:133] succeeded building to: functional-815404
I0717 19:32:23.061174  624800 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
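The three STEP lines in the build log above suggest the `testdata/build` context contains a Dockerfile roughly like the following. This is a reconstruction inferred from the log, not the actual test fixture:

```dockerfile
# Hypothetical reconstruction of testdata/build/Dockerfile,
# inferred from the STEP 1/3 .. STEP 3/3 lines above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

As the stderr log shows, on a CRI-O profile minikube ships the build context into the node as a tarball under /var/lib/minikube/build and delegates the actual build to `sudo podman build` inside the node.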

TestFunctional/parallel/ImageCommands/Setup (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-815404
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image load --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 image load --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr: (1.25800799s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image load --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-815404 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-815404 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-qmv7d" [2dd52abb-00b8-4b38-9351-0153f6218b51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-qmv7d" [2dd52abb-00b8-4b38-9351-0153f6218b51] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004108018s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.26s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-815404
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image load --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image save docker.io/kicbase/echo-server:functional-815404 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image rm docker.io/kicbase/echo-server:functional-815404 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-815404
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 image save --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-815404 image save --daemon docker.io/kicbase/echo-server:functional-815404 --alsologtostderr: (2.321347394s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-815404
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 621223: os: process already finished
helpers_test.go:502: unable to terminate pid 621109: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-815404 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5654ad4c-f0b7-4c7c-96b5-d3f090ede315] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5654ad4c-f0b7-4c7c-96b5-d3f090ede315] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.006257656s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service list -o json
functional_test.go:1490: Took "341.316588ms" to run "out/minikube-linux-arm64 -p functional-815404 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32036
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32036
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-815404 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.29.175 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-815404 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.78s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "700.03985ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "81.390108ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.78s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "451.488969ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "68.646292ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (6.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdany-port4063795094/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721244721300715367" to /tmp/TestFunctionalparallelMountCmdany-port4063795094/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721244721300715367" to /tmp/TestFunctionalparallelMountCmdany-port4063795094/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721244721300715367" to /tmp/TestFunctionalparallelMountCmdany-port4063795094/001/test-1721244721300715367
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (458.253865ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 19:32 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 19:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 19:32 test-1721244721300715367
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh cat /mount-9p/test-1721244721300715367
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-815404 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ace08b8f-91fd-4f54-80b1-f5c69c94c7cf] Pending
helpers_test.go:344: "busybox-mount" [ace08b8f-91fd-4f54-80b1-f5c69c94c7cf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ace08b8f-91fd-4f54-80b1-f5c69c94c7cf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ace08b8f-91fd-4f54-80b1-f5c69c94c7cf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004390276s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-815404 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdany-port4063795094/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.80s)

TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdspecific-port514535088/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.440588ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdspecific-port514535088/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "sudo umount -f /mount-9p": exit status 1 (316.916737ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-815404 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdspecific-port514535088/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)
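The cleanup above tolerates `umount: /mount-9p: not mounted.` (exit status 32) because the mount was already torn down. Extracting a failed command's exit code in Go, as the harness does when it logs "exit status 1"/"exit status 32", looks like the sketch below; `sh -c 'exit 32'` stands in for the real umount call so the example is self-contained:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndGetExitCode runs a command and returns its exit code. A non-zero
// exit (like umount's 32 for "not mounted") is reported as a code, not
// treated as a hard error; only failure to start the command is an error.
func runAndGetExitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil // command ran but exited non-zero
	}
	return -1, err // command could not be started at all
}

func main() {
	code, err := runAndGetExitCode("sh", "-c", "exit 32")
	fmt.Println(code, err == nil)
}
```

A caller can then whitelist specific codes (e.g. treat 32 from umount as "already unmounted") instead of failing the test run.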

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T" /mount1: exit status 1 (647.542185ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-815404 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-815404 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-815404 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2149277187/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-815404
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-815404
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-815404
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (188.48s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-420773 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 19:32:34.119076  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:33:56.039322  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-420773 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m7.696561879s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (188.48s)

TestMultiControlPlane/serial/DeployApp (7.05s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-420773 -- rollout status deployment/busybox: (4.084537802s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-2mzvm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-9zbxz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-wdhsc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-2mzvm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-9zbxz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-wdhsc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-2mzvm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-9zbxz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-wdhsc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.05s)

TestMultiControlPlane/serial/PingHostFromPods (1.63s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-2mzvm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-2mzvm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-9zbxz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-9zbxz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-wdhsc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-420773 -- exec busybox-fc5497c4f-wdhsc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)
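The pipeline run in each pod above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes the fifth line of the nslookup output and returns its third space-separated field (the resolved host IP). The same extraction in Go; the sample output is illustrative and follows the older busybox nslookup layout ("Address 1: <ip>") that the pipeline assumes:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take the 5th
// line and return its 3rd single-space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style nslookup output; addresses are made up.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	fmt.Println(hostIPFromNslookup(sample))
}
```

Note the pipeline is brittle by design: `cut -d' '` does not merge repeated spaces, and newer nslookup implementations format the answer differently, so the hard-coded line and field numbers only hold for the image these tests run.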

TestMultiControlPlane/serial/AddWorkerNode (36.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-420773 -v=7 --alsologtostderr
E0717 19:36:12.196483  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-420773 -v=7 --alsologtostderr: (35.479640726s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.48s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-420773 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

TestMultiControlPlane/serial/CopyFile (18.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 status --output json -v=7 --alsologtostderr: (1.004242395s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp testdata/cp-test.txt ha-420773:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile501601418/001/cp-test_ha-420773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773:/home/docker/cp-test.txt ha-420773-m02:/home/docker/cp-test_ha-420773_ha-420773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test_ha-420773_ha-420773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773:/home/docker/cp-test.txt ha-420773-m03:/home/docker/cp-test_ha-420773_ha-420773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test_ha-420773_ha-420773-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773:/home/docker/cp-test.txt ha-420773-m04:/home/docker/cp-test_ha-420773_ha-420773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test_ha-420773_ha-420773-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp testdata/cp-test.txt ha-420773-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile501601418/001/cp-test_ha-420773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m02:/home/docker/cp-test.txt ha-420773:/home/docker/cp-test_ha-420773-m02_ha-420773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test_ha-420773-m02_ha-420773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m02:/home/docker/cp-test.txt ha-420773-m03:/home/docker/cp-test_ha-420773-m02_ha-420773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test_ha-420773-m02_ha-420773-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m02:/home/docker/cp-test.txt ha-420773-m04:/home/docker/cp-test_ha-420773-m02_ha-420773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test_ha-420773-m02_ha-420773-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp testdata/cp-test.txt ha-420773-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile501601418/001/cp-test_ha-420773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m03:/home/docker/cp-test.txt ha-420773:/home/docker/cp-test_ha-420773-m03_ha-420773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test_ha-420773-m03_ha-420773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m03:/home/docker/cp-test.txt ha-420773-m02:/home/docker/cp-test_ha-420773-m03_ha-420773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test_ha-420773-m03_ha-420773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m03:/home/docker/cp-test.txt ha-420773-m04:/home/docker/cp-test_ha-420773-m03_ha-420773-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test.txt"
E0717 19:36:34.544005  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:34.549275  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:34.559568  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:34.579869  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:34.620075  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:34.700380  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test_ha-420773-m03_ha-420773-m04.txt"
E0717 19:36:34.861209  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:36:35.181490  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp testdata/cp-test.txt ha-420773-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile501601418/001/cp-test_ha-420773-m04.txt
E0717 19:36:35.826863  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m04:/home/docker/cp-test.txt ha-420773:/home/docker/cp-test_ha-420773-m04_ha-420773.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test.txt"
E0717 19:36:37.107734  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773 "sudo cat /home/docker/cp-test_ha-420773-m04_ha-420773.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m04:/home/docker/cp-test.txt ha-420773-m02:/home/docker/cp-test_ha-420773-m04_ha-420773-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test_ha-420773-m04_ha-420773-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 cp ha-420773-m04:/home/docker/cp-test.txt ha-420773-m03:/home/docker/cp-test_ha-420773-m04_ha-420773-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m03 "sudo cat /home/docker/cp-test_ha-420773-m04_ha-420773-m03.txt"
E0717 19:36:39.668341  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/CopyFile (18.91s)
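Every CopyFile step above follows the same round trip: `minikube cp` a file into a node path, then `minikube ssh -n <node> "sudo cat ..."` it back to verify the contents survived the copy. A sketch of that verify loop with a plain local directory standing in for a node (the minikube invocations in the comments are the real calls from the log; `node-m02`, the file contents, and `workdir` are stand-ins):

```shell
# Local stand-in for a node; the real test targets ha-420773-m02 etc.
workdir=$(mktemp -d)
mkdir -p "$workdir/node-m02"
printf 'cp-test contents\n' > "$workdir/cp-test.txt"

# Real call: out/minikube-linux-arm64 -p ha-420773 cp testdata/cp-test.txt ha-420773-m02:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/node-m02/cp-test.txt"

# Real call: out/minikube-linux-arm64 -p ha-420773 ssh -n ha-420773-m02 "sudo cat /home/docker/cp-test.txt"
roundtrip=$(cat "$workdir/node-m02/cp-test.txt")

# Compare what came back with what went in.
if [ "$roundtrip" = 'cp-test contents' ]; then
  result=match
else
  result=mismatch
fi
echo "$result"
rm -rf "$workdir"
```

The test repeats this for every ordered pair of nodes, which is why the log shows the full n×n copy matrix.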

TestMultiControlPlane/serial/StopSecondaryNode (12.77s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 node stop m02 -v=7 --alsologtostderr
E0717 19:36:39.880430  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:36:44.788836  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 node stop m02 -v=7 --alsologtostderr: (12.020960991s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr: exit status 7 (749.152634ms)

-- stdout --
	ha-420773
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-420773-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420773-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-420773-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 19:36:51.767912  640956 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:36:51.768116  640956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:36:51.768144  640956 out.go:304] Setting ErrFile to fd 2...
	I0717 19:36:51.768166  640956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:36:51.768460  640956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:36:51.768694  640956 out.go:298] Setting JSON to false
	I0717 19:36:51.768754  640956 mustload.go:65] Loading cluster: ha-420773
	I0717 19:36:51.768865  640956 notify.go:220] Checking for updates...
	I0717 19:36:51.769258  640956 config.go:182] Loaded profile config "ha-420773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:36:51.769275  640956 status.go:255] checking status of ha-420773 ...
	I0717 19:36:51.770108  640956 cli_runner.go:164] Run: docker container inspect ha-420773 --format={{.State.Status}}
	I0717 19:36:51.788197  640956 status.go:330] ha-420773 host status = "Running" (err=<nil>)
	I0717 19:36:51.788220  640956 host.go:66] Checking if "ha-420773" exists ...
	I0717 19:36:51.788525  640956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420773
	I0717 19:36:51.806975  640956 host.go:66] Checking if "ha-420773" exists ...
	I0717 19:36:51.807296  640956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:36:51.807355  640956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420773
	I0717 19:36:51.841392  640956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/ha-420773/id_rsa Username:docker}
	I0717 19:36:51.941096  640956 ssh_runner.go:195] Run: systemctl --version
	I0717 19:36:51.946182  640956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:36:51.957747  640956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:36:52.016664  640956 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-17 19:36:52.004956099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:36:52.017298  640956 kubeconfig.go:125] found "ha-420773" server: "https://192.168.49.254:8443"
	I0717 19:36:52.017334  640956 api_server.go:166] Checking apiserver status ...
	I0717 19:36:52.017385  640956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.029660  640956 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup
	I0717 19:36:52.039747  640956 api_server.go:182] apiserver freezer: "12:freezer:/docker/e18da2aaad266b7ff2e7db47d216d9223ac2b22e34d40db15f525da80de38cbe/crio/crio-61a23a2861f3c4048a99924861b96b9e19b29d632f3e61973ac5fe8d22acd155"
	I0717 19:36:52.039826  640956 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e18da2aaad266b7ff2e7db47d216d9223ac2b22e34d40db15f525da80de38cbe/crio/crio-61a23a2861f3c4048a99924861b96b9e19b29d632f3e61973ac5fe8d22acd155/freezer.state
	I0717 19:36:52.048961  640956 api_server.go:204] freezer state: "THAWED"
	I0717 19:36:52.048989  640956 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 19:36:52.058101  640956 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 19:36:52.058131  640956 status.go:422] ha-420773 apiserver status = Running (err=<nil>)
	I0717 19:36:52.058143  640956 status.go:257] ha-420773 status: &{Name:ha-420773 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:36:52.058192  640956 status.go:255] checking status of ha-420773-m02 ...
	I0717 19:36:52.058508  640956 cli_runner.go:164] Run: docker container inspect ha-420773-m02 --format={{.State.Status}}
	I0717 19:36:52.075791  640956 status.go:330] ha-420773-m02 host status = "Stopped" (err=<nil>)
	I0717 19:36:52.075817  640956 status.go:343] host is not running, skipping remaining checks
	I0717 19:36:52.075825  640956 status.go:257] ha-420773-m02 status: &{Name:ha-420773-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:36:52.075892  640956 status.go:255] checking status of ha-420773-m03 ...
	I0717 19:36:52.076203  640956 cli_runner.go:164] Run: docker container inspect ha-420773-m03 --format={{.State.Status}}
	I0717 19:36:52.095069  640956 status.go:330] ha-420773-m03 host status = "Running" (err=<nil>)
	I0717 19:36:52.095095  640956 host.go:66] Checking if "ha-420773-m03" exists ...
	I0717 19:36:52.095525  640956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420773-m03
	I0717 19:36:52.115096  640956 host.go:66] Checking if "ha-420773-m03" exists ...
	I0717 19:36:52.115540  640956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:36:52.115594  640956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420773-m03
	I0717 19:36:52.138781  640956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/ha-420773-m03/id_rsa Username:docker}
	I0717 19:36:52.236819  640956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:36:52.250492  640956 kubeconfig.go:125] found "ha-420773" server: "https://192.168.49.254:8443"
	I0717 19:36:52.250520  640956 api_server.go:166] Checking apiserver status ...
	I0717 19:36:52.250573  640956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.264602  640956 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup
	I0717 19:36:52.274398  640956 api_server.go:182] apiserver freezer: "12:freezer:/docker/513b2833b69cbede0501d08e191c3e2c60c194a62f474d1208ca2e1c903869e4/crio/crio-ba5e0011f09f5f70edd3f11af7430ea17c140aa25bb3a4b3391a61bc5e3fe656"
	I0717 19:36:52.274475  640956 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/513b2833b69cbede0501d08e191c3e2c60c194a62f474d1208ca2e1c903869e4/crio/crio-ba5e0011f09f5f70edd3f11af7430ea17c140aa25bb3a4b3391a61bc5e3fe656/freezer.state
	I0717 19:36:52.283337  640956 api_server.go:204] freezer state: "THAWED"
	I0717 19:36:52.283455  640956 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 19:36:52.291776  640956 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 19:36:52.291814  640956 status.go:422] ha-420773-m03 apiserver status = Running (err=<nil>)
	I0717 19:36:52.291827  640956 status.go:257] ha-420773-m03 status: &{Name:ha-420773-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:36:52.291848  640956 status.go:255] checking status of ha-420773-m04 ...
	I0717 19:36:52.292199  640956 cli_runner.go:164] Run: docker container inspect ha-420773-m04 --format={{.State.Status}}
	I0717 19:36:52.309905  640956 status.go:330] ha-420773-m04 host status = "Running" (err=<nil>)
	I0717 19:36:52.309934  640956 host.go:66] Checking if "ha-420773-m04" exists ...
	I0717 19:36:52.310262  640956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420773-m04
	I0717 19:36:52.329356  640956 host.go:66] Checking if "ha-420773-m04" exists ...
	I0717 19:36:52.330185  640956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:36:52.330238  640956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420773-m04
	I0717 19:36:52.346440  640956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/ha-420773-m04/id_rsa Username:docker}
	I0717 19:36:52.440497  640956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:36:52.456642  640956 status.go:257] ha-420773-m04 status: &{Name:ha-420773-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.77s)
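The status check in the stderr log above locates the apiserver with `pgrep`, greps the freezer controller line out of `/proc/<pid>/cgroup`, and reads the corresponding `freezer.state` under `/sys/fs/cgroup/freezer`, expecting `THAWED` for a running container. A sketch of the path extraction against a cgroup line (the docker/crio hashes are truncated here for readability; the full values appear in the log above):

```shell
# Freezer controller line as it appears in /proc/<pid>/cgroup
# (hashes truncated in this sample).
cgroup_line='12:freezer:/docker/e18da2.../crio/crio-61a23a...'

# Strip everything up to and including ":freezer:" to get the cgroup
# path; status appends it to /sys/fs/cgroup/freezer/ and reads
# freezer.state, which is "THAWED" for an unfrozen container.
freezer_path=${cgroup_line#*:freezer:}
state_file="/sys/fs/cgroup/freezer${freezer_path}/freezer.state"
echo "$freezer_path"
echo "$state_file"
```

This is a cgroup-v1 layout; the hierarchy index (`12` here) varies by kernel configuration.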

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.65s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 node start m02 -v=7 --alsologtostderr
E0717 19:36:55.029923  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:37:15.510479  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 node start m02 -v=7 --alsologtostderr: (31.094379476s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr: (1.382681898s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.65s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.331684847s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-420773 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-420773 -v=7 --alsologtostderr
E0717 19:37:56.472315  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-420773 -v=7 --alsologtostderr: (36.953277353s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-420773 --wait=true -v=7 --alsologtostderr
E0717 19:39:18.393289  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:41:12.195555  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-420773 --wait=true -v=7 --alsologtostderr: (3m10.021902286s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-420773
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.13s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.91s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 node delete m03 -v=7 --alsologtostderr: (11.920664186s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.91s)
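The go-template in the last step prints one ` True`/` False` line per node Ready condition, so asserting cluster health reduces to counting non-`True` lines. A sketch of that check against sample output (the three-line sample reflects the three nodes left after deleting m03; the real test feeds kubectl's actual output in):

```shell
# Sample template output: one Ready status per node, as the
# go-template above would print for three Ready nodes.
template_output=' True
 True
 True'

# Count lines that do not contain "True"; zero means every node is Ready.
not_ready=$(printf '%s\n' "$template_output" | grep -cv 'True')
if [ "$not_ready" -eq 0 ]; then
  echo "all nodes Ready"
else
  echo "$not_ready node(s) not Ready"
fi
```

Note `grep -c` exits nonzero when the count is zero, which is harmless inside the command substitution here.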

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (35.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 stop -v=7 --alsologtostderr
E0717 19:41:34.543709  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:42:02.234092  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-420773 stop -v=7 --alsologtostderr: (35.706061769s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr: exit status 7 (111.54818ms)

-- stdout --
	ha-420773
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420773-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420773-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 19:42:07.358211  655648 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:42:07.358393  655648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:42:07.358403  655648 out.go:304] Setting ErrFile to fd 2...
	I0717 19:42:07.358408  655648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:42:07.358647  655648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:42:07.358832  655648 out.go:298] Setting JSON to false
	I0717 19:42:07.358873  655648 mustload.go:65] Loading cluster: ha-420773
	I0717 19:42:07.358959  655648 notify.go:220] Checking for updates...
	I0717 19:42:07.359982  655648 config.go:182] Loaded profile config "ha-420773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:42:07.360008  655648 status.go:255] checking status of ha-420773 ...
	I0717 19:42:07.360611  655648 cli_runner.go:164] Run: docker container inspect ha-420773 --format={{.State.Status}}
	I0717 19:42:07.378753  655648 status.go:330] ha-420773 host status = "Stopped" (err=<nil>)
	I0717 19:42:07.378780  655648 status.go:343] host is not running, skipping remaining checks
	I0717 19:42:07.378788  655648 status.go:257] ha-420773 status: &{Name:ha-420773 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:42:07.378813  655648 status.go:255] checking status of ha-420773-m02 ...
	I0717 19:42:07.379105  655648 cli_runner.go:164] Run: docker container inspect ha-420773-m02 --format={{.State.Status}}
	I0717 19:42:07.401646  655648 status.go:330] ha-420773-m02 host status = "Stopped" (err=<nil>)
	I0717 19:42:07.401666  655648 status.go:343] host is not running, skipping remaining checks
	I0717 19:42:07.401674  655648 status.go:257] ha-420773-m02 status: &{Name:ha-420773-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:42:07.401696  655648 status.go:255] checking status of ha-420773-m04 ...
	I0717 19:42:07.401995  655648 cli_runner.go:164] Run: docker container inspect ha-420773-m04 --format={{.State.Status}}
	I0717 19:42:07.423004  655648 status.go:330] ha-420773-m04 host status = "Stopped" (err=<nil>)
	I0717 19:42:07.423203  655648 status.go:343] host is not running, skipping remaining checks
	I0717 19:42:07.423210  655648 status.go:257] ha-420773-m04 status: &{Name:ha-420773-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (122.64s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-420773 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-420773 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.691361862s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (122.64s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.47s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-420773 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-420773 --control-plane -v=7 --alsologtostderr: (1m17.489293938s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-420773 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
TestJSONOutput/start/Command (58.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-211858 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0717 19:46:12.195534  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:46:34.543194  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-211858 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (58.604515514s)
--- PASS: TestJSONOutput/start/Command (58.61s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-211858 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-211858 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-211858 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-211858 --output=json --user=testUser: (5.886703616s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-263469 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-263469 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.32686ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8d7aa438-14c5-42fb-b4ee-5138a4c32313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-263469] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b89d2e3b-6cb5-4249-aafd-7a5b21b15edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"8dc03daf-0c76-45f5-a379-900ec5b2efea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99c2be6f-2d25-4aeb-b01e-c35bf80e2f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig"}}
	{"specversion":"1.0","id":"46356549-519e-4949-9fa2-08efca376dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube"}}
	{"specversion":"1.0","id":"2041de9c-72ef-40e1-9372-1fdb6116c983","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e92dd4a9-a37a-4ce1-8c42-1d8e09399994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"037e6549-bd5a-4712-a8fb-31b91acdb3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-263469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-263469
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.96s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-731555 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-731555 --network=: (36.844086999s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-731555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-731555
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-731555: (2.088710988s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.96s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.3s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-453748 --network=bridge
E0717 19:47:35.241382  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-453748 --network=bridge: (34.260610896s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-453748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-453748
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-453748: (2.02127237s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.30s)

                                                
                                    
TestKicExistingNetwork (35.61s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-280701 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-280701 --network=existing-network: (33.485694267s)
helpers_test.go:175: Cleaning up "existing-network-280701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-280701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-280701: (1.975174544s)
--- PASS: TestKicExistingNetwork (35.61s)

                                                
                                    
TestKicCustomSubnet (34.73s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-551407 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-551407 --subnet=192.168.60.0/24: (32.638910272s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-551407 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-551407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-551407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-551407: (2.074311996s)
--- PASS: TestKicCustomSubnet (34.73s)

                                                
                                    
TestKicStaticIP (38.13s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-093139 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-093139 --static-ip=192.168.200.200: (35.857052149s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-093139 ip
helpers_test.go:175: Cleaning up "static-ip-093139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-093139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-093139: (2.109825685s)
--- PASS: TestKicStaticIP (38.13s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-395638 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-395638 --driver=docker  --container-runtime=crio: (32.236553048s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-398641 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-398641 --driver=docker  --container-runtime=crio: (30.708681662s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-395638
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-398641
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-398641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-398641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-398641: (1.942973029s)
helpers_test.go:175: Cleaning up "first-395638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-395638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-395638: (2.451170812s)
--- PASS: TestMinikubeProfile (68.60s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.65s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-495745 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-495745 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.646911174s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-495745 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.15s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0717 19:51:12.197630  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.144842321s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508176 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-495745 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-495745 --alsologtostderr -v=5: (1.620129742s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508176 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-508176
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-508176: (1.205353106s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508176
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508176: (6.972855791s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508176 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (89.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 19:51:34.543473  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 19:52:57.595249  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548412 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m28.479089116s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-548412 -- rollout status deployment/busybox: (3.102706617s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-8srpq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-qzbl5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-8srpq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-qzbl5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-8srpq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-qzbl5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.00s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-8srpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-8srpq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-qzbl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-548412 -- exec busybox-fc5497c4f-qzbl5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (29.40s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-548412 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-548412 -v 3 --alsologtostderr: (28.725778866s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.40s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-548412 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (10.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp testdata/cp-test.txt multinode-548412:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211306048/001/cp-test_multinode-548412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412:/home/docker/cp-test.txt multinode-548412-m02:/home/docker/cp-test_multinode-548412_multinode-548412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test_multinode-548412_multinode-548412-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412:/home/docker/cp-test.txt multinode-548412-m03:/home/docker/cp-test_multinode-548412_multinode-548412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test_multinode-548412_multinode-548412-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp testdata/cp-test.txt multinode-548412-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211306048/001/cp-test_multinode-548412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m02:/home/docker/cp-test.txt multinode-548412:/home/docker/cp-test_multinode-548412-m02_multinode-548412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test_multinode-548412-m02_multinode-548412.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m02:/home/docker/cp-test.txt multinode-548412-m03:/home/docker/cp-test_multinode-548412-m02_multinode-548412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test_multinode-548412-m02_multinode-548412-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp testdata/cp-test.txt multinode-548412-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211306048/001/cp-test_multinode-548412-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m03:/home/docker/cp-test.txt multinode-548412:/home/docker/cp-test_multinode-548412-m03_multinode-548412.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412 "sudo cat /home/docker/cp-test_multinode-548412-m03_multinode-548412.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 cp multinode-548412-m03:/home/docker/cp-test.txt multinode-548412-m02:/home/docker/cp-test_multinode-548412-m03_multinode-548412-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 ssh -n multinode-548412-m02 "sudo cat /home/docker/cp-test_multinode-548412-m03_multinode-548412-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.15s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-548412 node stop m03: (1.204085197s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548412 status: exit status 7 (542.481825ms)

-- stdout --
	multinode-548412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr: exit status 7 (522.810312ms)

-- stdout --
	multinode-548412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548412-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548412-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 19:53:48.293686  710353 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:53:48.293870  710353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:53:48.293884  710353 out.go:304] Setting ErrFile to fd 2...
	I0717 19:53:48.293891  710353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:53:48.294177  710353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:53:48.294404  710353 out.go:298] Setting JSON to false
	I0717 19:53:48.294470  710353 mustload.go:65] Loading cluster: multinode-548412
	I0717 19:53:48.294568  710353 notify.go:220] Checking for updates...
	I0717 19:53:48.294972  710353 config.go:182] Loaded profile config "multinode-548412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:53:48.294994  710353 status.go:255] checking status of multinode-548412 ...
	I0717 19:53:48.295585  710353 cli_runner.go:164] Run: docker container inspect multinode-548412 --format={{.State.Status}}
	I0717 19:53:48.316414  710353 status.go:330] multinode-548412 host status = "Running" (err=<nil>)
	I0717 19:53:48.316438  710353 host.go:66] Checking if "multinode-548412" exists ...
	I0717 19:53:48.316752  710353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548412
	I0717 19:53:48.342608  710353 host.go:66] Checking if "multinode-548412" exists ...
	I0717 19:53:48.342937  710353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:53:48.343009  710353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548412
	I0717 19:53:48.365718  710353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33644 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/multinode-548412/id_rsa Username:docker}
	I0717 19:53:48.460933  710353 ssh_runner.go:195] Run: systemctl --version
	I0717 19:53:48.465387  710353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:53:48.477081  710353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:53:48.535149  710353 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-17 19:53:48.525222152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 19:53:48.535829  710353 kubeconfig.go:125] found "multinode-548412" server: "https://192.168.67.2:8443"
	I0717 19:53:48.535870  710353 api_server.go:166] Checking apiserver status ...
	I0717 19:53:48.535921  710353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:53:48.548147  710353 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I0717 19:53:48.558611  710353 api_server.go:182] apiserver freezer: "12:freezer:/docker/cdc64bca434135f2d9d488dbf5d2642c2df65b5b5b6f4280b790992d5bf350cc/crio/crio-c3d0acef0ab25ae6d1a2de18a8cfb26060e76a3d1ef8112b433cecd6fa5e240c"
	I0717 19:53:48.558685  710353 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cdc64bca434135f2d9d488dbf5d2642c2df65b5b5b6f4280b790992d5bf350cc/crio/crio-c3d0acef0ab25ae6d1a2de18a8cfb26060e76a3d1ef8112b433cecd6fa5e240c/freezer.state
	I0717 19:53:48.568240  710353 api_server.go:204] freezer state: "THAWED"
	I0717 19:53:48.568269  710353 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:53:48.576173  710353 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:53:48.576206  710353 status.go:422] multinode-548412 apiserver status = Running (err=<nil>)
	I0717 19:53:48.576219  710353 status.go:257] multinode-548412 status: &{Name:multinode-548412 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:53:48.576240  710353 status.go:255] checking status of multinode-548412-m02 ...
	I0717 19:53:48.576573  710353 cli_runner.go:164] Run: docker container inspect multinode-548412-m02 --format={{.State.Status}}
	I0717 19:53:48.594938  710353 status.go:330] multinode-548412-m02 host status = "Running" (err=<nil>)
	I0717 19:53:48.594967  710353 host.go:66] Checking if "multinode-548412-m02" exists ...
	I0717 19:53:48.595268  710353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548412-m02
	I0717 19:53:48.612108  710353 host.go:66] Checking if "multinode-548412-m02" exists ...
	I0717 19:53:48.612415  710353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:53:48.612474  710353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548412-m02
	I0717 19:53:48.631183  710353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33649 SSHKeyPath:/home/jenkins/minikube-integration/19283-589755/.minikube/machines/multinode-548412-m02/id_rsa Username:docker}
	I0717 19:53:48.724895  710353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:53:48.737415  710353 status.go:257] multinode-548412-m02 status: &{Name:multinode-548412-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:53:48.737456  710353 status.go:255] checking status of multinode-548412-m03 ...
	I0717 19:53:48.737760  710353 cli_runner.go:164] Run: docker container inspect multinode-548412-m03 --format={{.State.Status}}
	I0717 19:53:48.757867  710353 status.go:330] multinode-548412-m03 host status = "Stopped" (err=<nil>)
	I0717 19:53:48.757887  710353 status.go:343] host is not running, skipping remaining checks
	I0717 19:53:48.757896  710353 status.go:257] multinode-548412-m03 status: &{Name:multinode-548412-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (9.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-548412 node start m03 -v=7 --alsologtostderr: (9.160344527s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.95s)

TestMultiNode/serial/RestartKeepsNodes (87.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548412
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-548412
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-548412: (24.992382794s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548412 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548412 --wait=true -v=8 --alsologtostderr: (1m2.394448926s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548412
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.51s)

TestMultiNode/serial/DeleteNode (5.38s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-548412 node delete m03: (4.694936352s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.38s)

TestMultiNode/serial/StopMultiNode (23.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-548412 stop: (23.663793067s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548412 status: exit status 7 (95.464033ms)

-- stdout --
	multinode-548412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr: exit status 7 (95.781438ms)

-- stdout --
	multinode-548412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548412-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 19:55:55.401287  717823 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:55:55.401499  717823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:55:55.401530  717823 out.go:304] Setting ErrFile to fd 2...
	I0717 19:55:55.401551  717823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:55:55.401828  717823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 19:55:55.402039  717823 out.go:298] Setting JSON to false
	I0717 19:55:55.402107  717823 mustload.go:65] Loading cluster: multinode-548412
	I0717 19:55:55.402181  717823 notify.go:220] Checking for updates...
	I0717 19:55:55.403153  717823 config.go:182] Loaded profile config "multinode-548412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:55:55.403201  717823 status.go:255] checking status of multinode-548412 ...
	I0717 19:55:55.403739  717823 cli_runner.go:164] Run: docker container inspect multinode-548412 --format={{.State.Status}}
	I0717 19:55:55.421260  717823 status.go:330] multinode-548412 host status = "Stopped" (err=<nil>)
	I0717 19:55:55.421281  717823 status.go:343] host is not running, skipping remaining checks
	I0717 19:55:55.421289  717823 status.go:257] multinode-548412 status: &{Name:multinode-548412 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:55:55.421329  717823 status.go:255] checking status of multinode-548412-m02 ...
	I0717 19:55:55.421648  717823 cli_runner.go:164] Run: docker container inspect multinode-548412-m02 --format={{.State.Status}}
	I0717 19:55:55.452562  717823 status.go:330] multinode-548412-m02 host status = "Stopped" (err=<nil>)
	I0717 19:55:55.452585  717823 status.go:343] host is not running, skipping remaining checks
	I0717 19:55:55.452593  717823 status.go:257] multinode-548412-m02 status: &{Name:multinode-548412-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

TestMultiNode/serial/RestartMultiNode (56.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548412 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 19:56:12.196384  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 19:56:34.543224  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548412 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.692945201s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-548412 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.36s)

TestMultiNode/serial/ValidateNameConflict (36.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-548412
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548412-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-548412-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.232539ms)

-- stdout --
	* [multinode-548412-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-548412-m02' is duplicated with machine name 'multinode-548412-m02' in profile 'multinode-548412'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-548412-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-548412-m03 --driver=docker  --container-runtime=crio: (34.41855645s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-548412
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-548412: exit status 80 (315.469466ms)

-- stdout --
	* Adding node m03 to cluster multinode-548412 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-548412-m03 already exists in multinode-548412-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-548412-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-548412-m03: (2.016934468s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.88s)

TestPreload (131.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-490587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-490587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.591766934s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-490587 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-490587 image pull gcr.io/k8s-minikube/busybox: (1.780111546s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-490587
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-490587: (5.822635151s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-490587 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-490587 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.556804752s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-490587 image list
helpers_test.go:175: Cleaning up "test-preload-490587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-490587
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-490587: (2.531617205s)
--- PASS: TestPreload (131.57s)

TestScheduledStopUnix (110.25s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-883310 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-883310 --memory=2048 --driver=docker  --container-runtime=crio: (33.721950369s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-883310 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-883310 -n scheduled-stop-883310
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-883310 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-883310 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-883310 -n scheduled-stop-883310
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-883310
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-883310 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 20:01:12.195539  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-883310
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-883310: exit status 7 (65.308057ms)

-- stdout --
	scheduled-stop-883310
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-883310 -n scheduled-stop-883310
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-883310 -n scheduled-stop-883310: exit status 7 (66.41161ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-883310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-883310
E0717 20:01:34.543501  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-883310: (5.000885006s)
--- PASS: TestScheduledStopUnix (110.25s)

TestInsufficientStorage (11.11s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-508938 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-508938 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.463183156s)

-- stdout --
	{"specversion":"1.0","id":"6fb92212-43bd-4b0b-b6fc-43da0ecbe0a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-508938] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3edf7a65-889d-476d-a2b3-25c99f8a1287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"7b5172fd-1ccd-4264-8657-f46fb5bf7563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a3f32b25-d213-477c-b4f8-b8114930f938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig"}}
	{"specversion":"1.0","id":"965f8af7-264d-4644-a5b7-d093ab9eed4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube"}}
	{"specversion":"1.0","id":"740c98d8-f00a-469a-b6f1-b4113fce4120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0a68f222-4a1e-4e0d-8588-ad96a4cefde7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e852eb4-bd5e-4e1b-b27c-a645fe115fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bba762ca-0015-474c-9867-d84fa7f076aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"894acda6-c813-4191-89d3-9d97c286c585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3473220c-00b1-4a26-99ba-4c3132bc628f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e1cffc4c-398c-4c5d-bb15-d49f3df91c9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-508938\" primary control-plane node in \"insufficient-storage-508938\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d492f88-0dda-4c79-b7ae-4f2300a283b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721146479-19264 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"92e1f9cd-5f20-4b54-b589-964b50b36217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"16926fac-43fd-4bbe-81f8-55fec4657912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-508938 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-508938 --output=json --layout=cluster: exit status 7 (281.158837ms)

-- stdout --
	{"Name":"insufficient-storage-508938","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-508938","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 20:01:43.320637  735855 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-508938" does not appear in /home/jenkins/minikube-integration/19283-589755/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-508938 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-508938 --output=json --layout=cluster: exit status 7 (299.483226ms)

-- stdout --
	{"Name":"insufficient-storage-508938","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-508938","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 20:01:43.619712  735919 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-508938" does not appear in /home/jenkins/minikube-integration/19283-589755/kubeconfig
	E0717 20:01:43.630342  735919 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/insufficient-storage-508938/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-508938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-508938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-508938: (2.062797153s)
--- PASS: TestInsufficientStorage (11.11s)

TestRunningBinaryUpgrade (74.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3824940875 start -p running-upgrade-513376 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3824940875 start -p running-upgrade-513376 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.720348238s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-513376 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0717 20:06:12.196310  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 20:06:34.543763  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-513376 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.465676628s)
helpers_test.go:175: Cleaning up "running-upgrade-513376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-513376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-513376: (2.571568695s)
--- PASS: TestRunningBinaryUpgrade (74.98s)

TestKubernetesUpgrade (394.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.011393207s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-089427
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-089427: (1.303884311s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-089427 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-089427 status --format={{.Host}}: exit status 7 (101.449739ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.752250653s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-089427 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (115.500602ms)

-- stdout --
	* [kubernetes-upgrade-089427] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-089427
	    minikube start -p kubernetes-upgrade-089427 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0894272 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-089427 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-089427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.059024294s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-089427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-089427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-089427: (2.511424541s)
--- PASS: TestKubernetesUpgrade (394.98s)

TestMissingContainerUpgrade (144.97s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2735231750 start -p missing-upgrade-024107 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2735231750 start -p missing-upgrade-024107 --memory=2200 --driver=docker  --container-runtime=crio: (1m13.435758775s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-024107
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-024107: (10.819835116s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-024107
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-024107 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-024107 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.263046829s)
helpers_test.go:175: Cleaning up "missing-upgrade-024107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-024107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-024107: (2.208869368s)
--- PASS: TestMissingContainerUpgrade (144.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.869049ms)

-- stdout --
	* [NoKubernetes-410640] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (40.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-410640 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-410640 --driver=docker  --container-runtime=crio: (40.386498845s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-410640 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.76s)

TestNoKubernetes/serial/StartWithStopK8s (20.37s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --driver=docker  --container-runtime=crio: (17.772615793s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-410640 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-410640 status -o json: exit status 2 (330.254159ms)

-- stdout --
	{"Name":"NoKubernetes-410640","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-410640
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-410640: (2.262724851s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.37s)

TestNoKubernetes/serial/Start (9.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-410640 --no-kubernetes --driver=docker  --container-runtime=crio: (9.873485578s)
--- PASS: TestNoKubernetes/serial/Start (9.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-410640 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-410640 "sudo systemctl is-active --quiet service kubelet": exit status 1 (402.896372ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

TestNoKubernetes/serial/ProfileList (1.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-410640
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-410640: (1.306171641s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (7.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-410640 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-410640 --driver=docker  --container-runtime=crio: (7.320627412s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-410640 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-410640 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.845008ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (1.18s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

TestStoppedBinaryUpgrade/Upgrade (78.82s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3399555471 start -p stopped-upgrade-977216 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0717 20:04:15.242233  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3399555471 start -p stopped-upgrade-977216 --memory=2200 --vm-driver=docker  --container-runtime=crio: (47.367242867s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3399555471 -p stopped-upgrade-977216 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3399555471 -p stopped-upgrade-977216 stop: (2.09723875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-977216 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-977216 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.354111593s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.82s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-977216
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-977216: (1.043745458s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestPause/serial/Start (64.35s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023767 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-023767 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m4.352694128s)
--- PASS: TestPause/serial/Start (64.35s)

TestPause/serial/SecondStartNoReconfiguration (25.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023767 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-023767 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.35121328s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.37s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023767 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-023767 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-023767 --output=json --layout=cluster: exit status 2 (361.464672ms)

-- stdout --
	{"Name":"pause-023767","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-023767","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
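The cluster-layout status JSON printed by `minikube status --output=json --layout=cluster` above can be consumed programmatically. A minimal Python sketch, using an abridged copy of the stdout block above (Step/StepDetail and the kubeconfig component are omitted here; 418 and 405 are the status codes minikube uses for Paused and Stopped in this output):

```python
import json

# Abridged from the `minikube status --output=json --layout=cluster`
# output shown above (Step/StepDetail and kubeconfig component omitted).
raw = """
{"Name":"pause-023767","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-023767","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
"""

status = json.loads(raw)

# Top-level StatusName reflects the whole profile; per-component
# statuses live under Nodes[].Components.
print(status["StatusName"])  # Paused
for node in status["Nodes"]:
    for name, component in node["Components"].items():
        print(f'{name}: {component["StatusName"]}')
```

This is how the status test itself distinguishes a paused cluster (exit status 2 with `StatusName: Paused`) from a running one.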

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-023767 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023767 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-023767 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-023767 --alsologtostderr -v=5: (2.843016409s)
--- PASS: TestPause/serial/DeletePaused (2.84s)

TestPause/serial/VerifyDeletedResources (0.34s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-023767
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-023767: exit status 1 (16.612301ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-023767: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)

TestNetworkPlugins/group/false (5.04s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-386416 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-386416 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (309.351452ms)

-- stdout --
	* [false-386416] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0717 20:09:11.030538  774722 out.go:291] Setting OutFile to fd 1 ...
	I0717 20:09:11.035844  774722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 20:09:11.035860  774722 out.go:304] Setting ErrFile to fd 2...
	I0717 20:09:11.035867  774722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 20:09:11.036166  774722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-589755/.minikube/bin
	I0717 20:09:11.036623  774722 out.go:298] Setting JSON to false
	I0717 20:09:11.037683  774722 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13894,"bootTime":1721233057,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0717 20:09:11.037760  774722 start.go:139] virtualization:  
	I0717 20:09:11.043107  774722 out.go:177] * [false-386416] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0717 20:09:11.046217  774722 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 20:09:11.046513  774722 notify.go:220] Checking for updates...
	I0717 20:09:11.050787  774722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:09:11.053010  774722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-589755/kubeconfig
	I0717 20:09:11.055102  774722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-589755/.minikube
	I0717 20:09:11.057378  774722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:09:11.059563  774722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:09:11.062240  774722 config.go:182] Loaded profile config "kubernetes-upgrade-089427": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 20:09:11.062467  774722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 20:09:11.099577  774722 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 20:09:11.099730  774722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:09:11.221335  774722 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 20:09:11.208915051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0717 20:09:11.221508  774722 docker.go:307] overlay module found
	I0717 20:09:11.224582  774722 out.go:177] * Using the docker driver based on user configuration
	I0717 20:09:11.226489  774722 start.go:297] selected driver: docker
	I0717 20:09:11.226514  774722 start.go:901] validating driver "docker" against <nil>
	I0717 20:09:11.226529  774722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:09:11.229026  774722 out.go:177] 
	W0717 20:09:11.230695  774722 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 20:09:11.232586  774722 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-386416 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-386416" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt
extensions:
- extension:
last-update: Wed, 17 Jul 2024 20:09:04 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-089427
contexts:
- context:
cluster: kubernetes-upgrade-089427
extensions:
- extension:
last-update: Wed, 17 Jul 2024 20:09:04 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: kubernetes-upgrade-089427
name: kubernetes-upgrade-089427
current-context: kubernetes-upgrade-089427
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-089427
user:
client-certificate: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.crt
client-key: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-386416

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386416"

----------------------- debugLogs end: false-386416 [took: 4.565776783s] --------------------------------
helpers_test.go:175: Cleaning up "false-386416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-386416
--- PASS: TestNetworkPlugins/group/false (5.04s)

TestStartStop/group/old-k8s-version/serial/FirstStart (179.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-784457 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0717 20:11:12.196459  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 20:11:34.543190  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-784457 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m59.683373674s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.68s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-784457 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82e86489-e1ea-450d-b6ff-2882406498a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [82e86489-e1ea-450d-b6ff-2882406498a9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003615836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-784457 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-361470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-361470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m2.448405572s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-784457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-784457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.266309407s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-784457 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-784457 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-784457 --alsologtostderr -v=3: (12.172325437s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-784457 -n old-k8s-version-784457
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-784457 -n old-k8s-version-784457: exit status 7 (89.233198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-784457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
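The EnableAddonAfterStop steps above follow a fixed pattern: query host state, tolerate the "stopped" exit status, then enable the addon. A minimal sketch of that tolerance check, assuming only what the log shows (in this run a stopped host printed `Stopped` and exited 7); the helper name `check_host_status` is hypothetical, not part of minikube:

```shell
#!/bin/sh
# Hypothetical helper mirroring how the test treats `minikube status` exit
# codes: 0 is a running host, 7 (as observed above for a stopped host) is
# logged as "(may be ok)" and the test proceeds; anything else is an error.
check_host_status() {
  case "$1" in
    0) echo "host running" ;;
    7) echo "status error: exit status 7 (may be ok)" ;;
    *) echo "unexpected exit status: $1" >&2; return 1 ;;
  esac
}

check_host_status 7   # prints: status error: exit status 7 (may be ok)
```

This matches the `status error: exit status 7 (may be ok)` lines that the test emits before running `addons enable dashboard`.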

TestStartStop/group/old-k8s-version/serial/SecondStart (152.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-784457 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-784457 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m31.872701601s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-784457 -n old-k8s-version-784457
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (152.22s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-361470 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [400c392d-ed66-4c50-8cb3-0cd4f1ad02f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [400c392d-ed66-4c50-8cb3-0cd4f1ad02f5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008298815s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-361470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-361470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-361470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192166111s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-361470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-361470 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-361470 --alsologtostderr -v=3: (12.043230912s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470: exit status 7 (75.494051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-361470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-361470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 20:16:12.195621  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 20:16:34.543746  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-361470 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m27.575892943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-74kzs" [95836cb7-285f-4020-839e-0e5809d6c97e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005003629s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-74kzs" [95836cb7-285f-4020-839e-0e5809d6c97e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00381098s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-784457 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-784457 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
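VerifyKubernetesImages compares the runtime's image list against the set minikube itself ships for the Kubernetes version under test and reports the remainder. A sketch of that comparison, assuming a hypothetical expected set (the real test reads `image list --format=json` and derives the expected images from the Kubernetes version); the actual image names are taken verbatim from the report above:

```shell
#!/bin/sh
# Hypothetical expected set; illustrative subset only.
expected='registry.k8s.io/kube-apiserver
registry.k8s.io/kube-proxy
registry.k8s.io/pause'

# Image names copied from the report above.
actual='registry.k8s.io/kube-apiserver:v1.20.0
gcr.io/k8s-minikube/busybox:1.28.4-glibc
kindest/kindnetd:v20240715-585640e9'

report_non_minikube() {
  echo "$actual" | while IFS= read -r img; do
    name=${img%%:*}                               # drop the tag
    if ! echo "$expected" | grep -qxF "$name"; then
      echo "Found non-minikube image: $img"
    fi
  done
}

report_non_minikube
```

Run against the sample data this flags the busybox and kindnetd images, matching the `Found non-minikube image:` lines in the log.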

TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-784457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-784457 -n old-k8s-version-784457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-784457 -n old-k8s-version-784457: exit status 2 (315.986522ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-784457 -n old-k8s-version-784457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-784457 -n old-k8s-version-784457: exit status 2 (351.227354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-784457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-784457 -n old-k8s-version-784457
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-784457 -n old-k8s-version-784457
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
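The Pause check above accepts exit status 2 from `minikube status` as long as the printed component states match a paused cluster: the APIServer field reads `Paused` while the Kubelet field reads `Stopped`. A sketch of that acceptance predicate; `is_paused` is a hypothetical helper, not a minikube command:

```shell
#!/bin/sh
# Hypothetical predicate: does (apiserver state, kubelet state) match what
# `minikube pause` should produce, per the stdout blocks in the report above?
is_paused() {
  [ "$1" = "Paused" ] && [ "$2" = "Stopped" ]
}

if is_paused "Paused" "Stopped"; then
  echo "cluster paused as expected"
fi
```

The subsequent `unpause` step then re-queries both fields with the same `--format={{.APIServer}}` / `--format={{.Kubelet}}` Go templates and expects them to report a running cluster again.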

TestStartStop/group/embed-certs/serial/FirstStart (60.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-704061 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-704061 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m0.707355275s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.71s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-704061 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7b81614-0e7d-40b8-899e-3b43594eea88] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c7b81614-0e7d-40b8-899e-3b43594eea88] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.002977452s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-704061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-704061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-704061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024091673s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-704061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-704061 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-704061 --alsologtostderr -v=3: (11.955378141s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704061 -n embed-certs-704061
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704061 -n embed-certs-704061: exit status 7 (68.244902ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-704061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (279.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-704061 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 20:18:39.892162  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:39.897495  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:39.907747  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:39.927995  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:39.968276  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:40.048574  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:40.208968  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:40.529521  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:41.169955  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:42.450946  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:45.011766  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:18:50.131986  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:19:00.372949  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:19:20.853162  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-704061 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m38.873369703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-704061 -n embed-certs-704061
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (279.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hnzkd" [424eaa20-88bb-407e-b36f-818e319070b5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003434966s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hnzkd" [424eaa20-88bb-407e-b36f-818e319070b5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003529357s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-361470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-361470 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-361470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470: exit status 2 (324.526433ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470: exit status 2 (341.554176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-361470 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-361470 -n default-k8s-diff-port-361470
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.00s)
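The Pause check above accepts certain non-zero exits from `minikube status` (logged as "may be ok"). A minimal sketch of that tolerance; the mapping is inferred only from the exit codes that appear in this report (0 running, 2 paused or stopped component, 7 stopped host), not from minikube's documented contract:

```python
# Exit codes seen in this report for `minikube status`:
#   0 -> all components running
#   2 -> a component is paused or stopped (logged as "may be ok")
#   7 -> the host itself is stopped
TOLERATED_STATUS_CODES = {0, 2, 7}

def status_is_fatal(exit_code: int) -> bool:
    """Treat only exit codes outside the tolerated set as test failures."""
    return exit_code not in TOLERATED_STATUS_CODES

print(status_is_fatal(2))   # False: paused component right after `pause`
print(status_is_fatal(1))   # True: a genuine status error
```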

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (65.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-539101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 20:20:01.815490  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
E0717 20:20:55.243041  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-539101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m5.427842152s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-539101 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a3eaf07-6524-4818-b332-7c6e9fe19e94] Pending
helpers_test.go:344: "busybox" [4a3eaf07-6524-4818-b332-7c6e9fe19e94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a3eaf07-6524-4818-b332-7c6e9fe19e94] Running
E0717 20:21:12.196238  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003622975s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-539101 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)
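Waits like "integration-test=busybox healthy within 9.003622975s" above come from a poll-until-deadline helper in helpers_test.go (Go). This is only an illustrative Python reduction of that pattern, with hypothetical names:

```python
import time

def wait_healthy(check, timeout_s, interval_s=0.05,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True; report elapsed seconds.

    Raises TimeoutError when the deadline passes first, mirroring how the
    test helpers fail a wait after e.g. 8m0s.
    """
    start = clock()
    while not check():
        if clock() - start >= timeout_s:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        sleep(interval_s)
    return clock() - start

# A pod that reports Running on the third poll (stand-in for kubectl checks).
states = iter(["Pending", "Pending", "Running"])
elapsed = wait_healthy(lambda: next(states) == "Running", timeout_s=5)
print(f"healthy within {elapsed:.3f}s")
```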

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-539101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-539101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060285788s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-539101 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.95s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-539101 --alsologtostderr -v=3
E0717 20:21:23.735725  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-539101 --alsologtostderr -v=3: (11.953452966s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-539101 -n no-preload-539101
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-539101 -n no-preload-539101: exit status 7 (65.674765ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-539101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (302.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-539101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 20:21:34.543826  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-539101 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (5m2.192667907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-539101 -n no-preload-539101
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wnzm2" [9c3cdd52-5011-4498-8760-35475b8eb3c1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004264726s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wnzm2" [9c3cdd52-5011-4498-8760-35475b8eb3c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0044402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-704061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-704061 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-704061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704061 -n embed-certs-704061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704061 -n embed-certs-704061: exit status 2 (321.229086ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704061 -n embed-certs-704061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704061 -n embed-certs-704061: exit status 2 (309.424738ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-704061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-704061 -n embed-certs-704061
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-704061 -n embed-certs-704061
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-282994 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 20:23:39.892287  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-282994 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (40.58225423s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-282994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-282994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.42179382s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-282994 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-282994 --alsologtostderr -v=3: (1.366482674s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-282994 -n newest-cni-282994
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-282994 -n newest-cni-282994: exit status 7 (70.996399ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-282994 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-282994 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 20:24:07.576558  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/old-k8s-version-784457/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-282994 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (15.427638934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-282994 -n newest-cni-282994
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-282994 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-282994 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-282994 -n newest-cni-282994
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-282994 -n newest-cni-282994: exit status 2 (332.480569ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-282994 -n newest-cni-282994
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-282994 -n newest-cni-282994: exit status 2 (335.837326ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-282994 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-282994 -n newest-cni-282994
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-282994 -n newest-cni-282994
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (59.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0717 20:24:51.393639  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.398932  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.409216  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.429496  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.469767  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.550061  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:51.710312  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:52.030679  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:52.671240  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:53.951492  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:24:56.512608  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:25:01.633534  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:25:11.874600  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (59.553974119s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.55s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4nbcg" [7a5b082a-746e-443d-9881-8cad922732d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4nbcg" [7a5b082a-746e-443d-9881-8cad922732d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003709763s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0717 20:26:12.195811  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/addons-747597/client.crt: no such file or directory
E0717 20:26:13.315482  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:26:17.597314  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (59.83417666s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-b7j5x" [0b8abaca-be22-4116-b3ff-dd3f6dab7048] Running
E0717 20:26:34.544034  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003390132s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-b7j5x" [0b8abaca-be22-4116-b3ff-dd3f6dab7048] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004829876s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-539101 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-539101 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-539101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-539101 -n no-preload-539101
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-539101 -n no-preload-539101: exit status 2 (314.022366ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-539101 -n no-preload-539101
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-539101 -n no-preload-539101: exit status 2 (323.210266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-539101 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-539101 -n no-preload-539101
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-539101 -n no-preload-539101
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)
E0717 20:31:25.861632  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/no-preload-539101/client.crt: no such file or directory
E0717 20:31:34.543855  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/functional-815404/client.crt: no such file or directory
E0717 20:31:40.775815  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:31:46.341853  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/no-preload-539101/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.653993422s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2x57n" [243549a5-17c9-44ab-89bc-1f381d057180] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005960001s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m2dhf" [92c7e20c-2da1-4633-b86e-e54f94230846] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m2dhf" [92c7e20c-2da1-4633-b86e-e54f94230846] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005031299s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0717 20:27:35.236250  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.01219024s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w4n9s" [127785d8-a848-4569-a8b9-a3c5542e617f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004770539s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c8p5c" [a7400257-8a2f-40b7-82a1-d0ac2f97f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c8p5c" [a7400257-8a2f-40b7-82a1-d0ac2f97f4a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003895692s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fdrjd" [a4fa511a-7704-45fd-ad2b-046b621df233] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fdrjd" [a4fa511a-7704-45fd-ad2b-046b621df233] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005125508s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m25.522294978s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0717 20:29:51.393682  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.727025729s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.73s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-48w27" [46b88f98-aecc-45e2-b219-fd06c2e5cc97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 20:30:18.851847  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:18.857064  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:18.867327  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:18.887557  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:18.927785  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:19.008610  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:19.076832  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/default-k8s-diff-port-361470/client.crt: no such file or directory
E0717 20:30:19.169043  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:19.489643  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:20.130470  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
E0717 20:30:21.410797  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-48w27" [46b88f98-aecc-45e2-b219-fd06c2e5cc97] Running
E0717 20:30:23.971999  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004130527s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qvgtg" [7400d41c-5378-421d-836b-a547b00e82d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006858681s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-386416 "pgrep -a kubelet"
E0717 20:30:39.334085  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/auto-386416/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-66mtn" [15371505-c91a-434c-8c12-ec55d66f1ee5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-66mtn" [15371505-c91a-434c-8c12-ec55d66f1ee5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004185211s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (59.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-386416 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (59.485855849s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-386416 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-386416 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cpb7s" [4627c3aa-9ffe-466e-b9b9-5e0884522a32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 20:31:50.422827  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.428067  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.438393  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.458815  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.499067  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.579323  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:50.739523  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:51.059699  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:51.700867  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
E0717 20:31:52.981734  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-cpb7s" [4627c3aa-9ffe-466e-b9b9-5e0884522a32] Running
E0717 20:31:55.541939  595147 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kindnet-386416/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003895188s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-386416 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-386416 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (33/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-114745 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-114745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-114745
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-066238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-066238
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (5.15s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-386416 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-386416

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-386416

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /etc/hosts:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /etc/resolv.conf:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-386416

>>> host: crictl pods:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: crictl containers:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> k8s: describe netcat deployment:
error: context "kubenet-386416" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-386416" does not exist

>>> k8s: netcat logs:
error: context "kubenet-386416" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-386416" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-386416" does not exist

>>> k8s: coredns logs:
error: context "kubenet-386416" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-386416" does not exist

>>> k8s: api server logs:
error: context "kubenet-386416" does not exist

>>> host: /etc/cni:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: ip a s:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: ip r s:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: iptables-save:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: iptables table nat:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-386416" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-386416" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-386416" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: kubelet daemon config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> k8s: kubelet logs:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 20:09:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-089427
contexts:
- context:
    cluster: kubernetes-upgrade-089427
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 20:09:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-089427
  name: kubernetes-upgrade-089427
current-context: kubernetes-upgrade-089427
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-089427
  user:
    client-certificate: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.crt
    client-key: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-386416

>>> host: docker daemon status:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: docker daemon config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: docker system info:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: cri-docker daemon status:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: cri-docker daemon config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: cri-dockerd version:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: containerd daemon status:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

>>> host: containerd daemon config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386416"

                                                
                                                
----------------------- debugLogs end: kubenet-386416 [took: 4.923926306s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-386416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-386416
--- SKIP: TestNetworkPlugins/group/kubenet (5.15s)

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-386416 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-386416

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-386416

>>> host: /etc/nsswitch.conf:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/hosts:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/resolv.conf:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-386416

>>> host: crictl pods:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: crictl containers:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> k8s: describe netcat deployment:
error: context "cilium-386416" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-386416" does not exist

>>> k8s: netcat logs:
error: context "cilium-386416" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-386416" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-386416" does not exist

>>> k8s: coredns logs:
error: context "cilium-386416" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-386416" does not exist

>>> k8s: api server logs:
error: context "cilium-386416" does not exist

>>> host: /etc/cni:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: ip a s:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: ip r s:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: iptables-save:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: iptables table nat:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-386416

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-386416

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-386416" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-386416" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-386416

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-386416

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-386416" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-386416" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-386416" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-386416" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-386416" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: kubelet daemon config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> k8s: kubelet logs:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19283-589755/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 20:09:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-089427
contexts:
- context:
    cluster: kubernetes-upgrade-089427
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 20:09:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-089427
  name: kubernetes-upgrade-089427
current-context: kubernetes-upgrade-089427
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-089427
  user:
    client-certificate: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.crt
    client-key: /home/jenkins/minikube-integration/19283-589755/.minikube/profiles/kubernetes-upgrade-089427/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-386416

>>> host: docker daemon status:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: docker daemon config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: docker system info:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: cri-docker daemon status:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: cri-docker daemon config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: cri-dockerd version:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: containerd daemon status:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: containerd daemon config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: containerd config dump:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: crio daemon status:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: crio daemon config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: /etc/crio:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

>>> host: crio config:
* Profile "cilium-386416" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386416"

----------------------- debugLogs end: cilium-386416 [took: 3.806417268s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-386416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-386416
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)