Test Report: Docker_Linux_crio_arm64 19478

cdbac7a92b6ef0941d2ffc9877dc4d64cf2ec5e1:2024-08-19:35858

Tests failed (2/328)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 34    | TestAddons/parallel/Ingress       | 153.57       |
| 36    | TestAddons/parallel/MetricsServer | 351.19       |
TestAddons/parallel/Ingress (153.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-778133 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-778133 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-778133 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b54c6515-9b9d-4274-a80f-666d2cd11914] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b54c6515-9b9d-4274-a80f-666d2cd11914] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00444029s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-778133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.375700068s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-778133 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 addons disable ingress-dns --alsologtostderr -v=1: (1.705229211s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 addons disable ingress --alsologtostderr -v=1: (7.743594969s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-778133
helpers_test.go:235: (dbg) docker inspect addons-778133:

-- stdout --
	[
	    {
	        "Id": "04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44",
	        "Created": "2024-08-19T17:52:50.954183116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T17:52:51.084486799Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1082065554095668b21dfc58cfca3febbc96bb8424fcaec6e38d6ee040df84c8",
	        "ResolvConfPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/hostname",
	        "HostsPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/hosts",
	        "LogPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44-json.log",
	        "Name": "/addons-778133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-778133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-778133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c-init/diff:/var/lib/docker/overlay2/18c6643ae063556b6e8c1e5b89d206551c41c973a0328ed325f1a299d228eb84/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-778133",
	                "Source": "/var/lib/docker/volumes/addons-778133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-778133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-778133",
	                "name.minikube.sigs.k8s.io": "addons-778133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58ad6a33cf42256571749241bb2bb8dd1b1a4c6ece618561dda5752029711b53",
	            "SandboxKey": "/var/run/docker/netns/58ad6a33cf42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-778133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ddf82f9e1e4cfa011e39367f54d35ae59db28a37a95c5531afcbd77f13f87fc1",
	                    "EndpointID": "db6c8f345048c659fe266ad293bbb9be9b7bfaf3319506b539d009b2f4f76d1f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-778133",
	                        "04d2b6f0984a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-778133 -n addons-778133
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 logs -n 25: (1.321790356s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-198345                                                                     | download-only-198345   | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| start   | --download-only -p                                                                          | download-docker-552596 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | download-docker-552596                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-552596                                                                   | download-docker-552596 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-383479   | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | binary-mirror-383479                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45625                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-383479                                                                     | binary-mirror-383479   | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-778133 --wait=true                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:55 UTC | 19 Aug 24 17:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-778133 ip                                                                            | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | -p addons-778133                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-778133 ssh cat                                                                       | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /opt/local-path-provisioner/pvc-de919d21-52a1-44ba-882f-4f4cb571fe76_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | -p addons-778133                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-778133 ssh curl -s                                                                   | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-778133 ip                                                                            | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 18:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:52:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:52:27.122467  435600 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:52:27.124658  435600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:27.124677  435600 out.go:358] Setting ErrFile to fd 2...
	I0819 17:52:27.124683  435600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:27.124963  435600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 17:52:27.125449  435600 out.go:352] Setting JSON to false
	I0819 17:52:27.126308  435600 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5694,"bootTime":1724084253,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 17:52:27.126386  435600 start.go:139] virtualization:  
	I0819 17:52:27.129034  435600 out.go:177] * [addons-778133] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:52:27.132200  435600 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:52:27.132372  435600 notify.go:220] Checking for updates...
	I0819 17:52:27.135563  435600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:52:27.138289  435600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:52:27.140302  435600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 17:52:27.142345  435600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 17:52:27.144334  435600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:52:27.146356  435600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:52:27.169447  435600 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:52:27.169574  435600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:27.229408  435600 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:27.219992119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:27.229519  435600 docker.go:307] overlay module found
	I0819 17:52:27.231379  435600 out.go:177] * Using the docker driver based on user configuration
	I0819 17:52:27.232634  435600 start.go:297] selected driver: docker
	I0819 17:52:27.232650  435600 start.go:901] validating driver "docker" against <nil>
	I0819 17:52:27.232665  435600 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:52:27.233312  435600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:27.285196  435600 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:27.275652498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:27.285404  435600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:52:27.285635  435600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:52:27.287582  435600 out.go:177] * Using Docker driver with root privileges
	I0819 17:52:27.289719  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:52:27.289744  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:27.289755  435600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:52:27.289842  435600 start.go:340] cluster config:
	{Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:27.291300  435600 out.go:177] * Starting "addons-778133" primary control-plane node in "addons-778133" cluster
	I0819 17:52:27.293694  435600 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:52:27.295003  435600 out.go:177] * Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:52:27.297374  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:27.297435  435600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:27.297458  435600 cache.go:56] Caching tarball of preloaded images
	I0819 17:52:27.297462  435600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:52:27.297540  435600 preload.go:172] Found /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 17:52:27.297550  435600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:52:27.297911  435600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json ...
	I0819 17:52:27.297942  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json: {Name:mk5de3d37436266e25961fb00c0c5a84a91bf9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:27.313071  435600 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:52:27.313195  435600 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:52:27.313221  435600 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory, skipping pull
	I0819 17:52:27.313232  435600 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b exists in cache, skipping pull
	I0819 17:52:27.313241  435600 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:52:27.313251  435600 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from local cache
	I0819 17:52:43.944311  435600 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from cached tarball
	I0819 17:52:43.944351  435600 cache.go:194] Successfully downloaded all kic artifacts
	I0819 17:52:43.944395  435600 start.go:360] acquireMachinesLock for addons-778133: {Name:mk95a2ebd9f8fd65d585e6bdd4fe86a3f12663b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:52:43.944894  435600 start.go:364] duration metric: took 474.221µs to acquireMachinesLock for "addons-778133"
	I0819 17:52:43.944931  435600 start.go:93] Provisioning new machine with config: &{Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:52:43.945028  435600 start.go:125] createHost starting for "" (driver="docker")
	I0819 17:52:43.946461  435600 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 17:52:43.946683  435600 start.go:159] libmachine.API.Create for "addons-778133" (driver="docker")
	I0819 17:52:43.946714  435600 client.go:168] LocalClient.Create starting
	I0819 17:52:43.946799  435600 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem
	I0819 17:52:44.287860  435600 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem
	I0819 17:52:44.740254  435600 cli_runner.go:164] Run: docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 17:52:44.755335  435600 cli_runner.go:211] docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 17:52:44.755429  435600 network_create.go:284] running [docker network inspect addons-778133] to gather additional debugging logs...
	I0819 17:52:44.755449  435600 cli_runner.go:164] Run: docker network inspect addons-778133
	W0819 17:52:44.768582  435600 cli_runner.go:211] docker network inspect addons-778133 returned with exit code 1
	I0819 17:52:44.768613  435600 network_create.go:287] error running [docker network inspect addons-778133]: docker network inspect addons-778133: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-778133 not found
	I0819 17:52:44.768626  435600 network_create.go:289] output of [docker network inspect addons-778133]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-778133 not found
	
	** /stderr **
	I0819 17:52:44.768729  435600 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:52:44.784048  435600 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a51e70}
	I0819 17:52:44.784084  435600 network_create.go:124] attempt to create docker network addons-778133 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 17:52:44.784153  435600 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-778133 addons-778133
	I0819 17:52:44.848141  435600 network_create.go:108] docker network addons-778133 192.168.49.0/24 created
	I0819 17:52:44.848173  435600 kic.go:121] calculated static IP "192.168.49.2" for the "addons-778133" container
	I0819 17:52:44.848292  435600 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 17:52:44.862848  435600 cli_runner.go:164] Run: docker volume create addons-778133 --label name.minikube.sigs.k8s.io=addons-778133 --label created_by.minikube.sigs.k8s.io=true
	I0819 17:52:44.879737  435600 oci.go:103] Successfully created a docker volume addons-778133
	I0819 17:52:44.879833  435600 cli_runner.go:164] Run: docker run --rm --name addons-778133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --entrypoint /usr/bin/test -v addons-778133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib
	I0819 17:52:46.855852  435600 cli_runner.go:217] Completed: docker run --rm --name addons-778133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --entrypoint /usr/bin/test -v addons-778133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib: (1.975983619s)
	I0819 17:52:46.855884  435600 oci.go:107] Successfully prepared a docker volume addons-778133
	I0819 17:52:46.855926  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:46.855958  435600 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 17:52:46.856036  435600 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-778133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 17:52:50.888577  435600 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-778133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir: (4.032490312s)
	I0819 17:52:50.888615  435600 kic.go:203] duration metric: took 4.03265359s to extract preloaded images to volume ...
	W0819 17:52:50.888756  435600 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 17:52:50.888871  435600 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 17:52:50.940133  435600 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-778133 --name addons-778133 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-778133 --network addons-778133 --ip 192.168.49.2 --volume addons-778133:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b
	I0819 17:52:51.242166  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Running}}
	I0819 17:52:51.262869  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:51.286367  435600 cli_runner.go:164] Run: docker exec addons-778133 stat /var/lib/dpkg/alternatives/iptables
	I0819 17:52:51.352160  435600 oci.go:144] the created container "addons-778133" has a running status.
	I0819 17:52:51.352192  435600 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa...
	I0819 17:52:52.310936  435600 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 17:52:52.336037  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:52.356452  435600 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 17:52:52.356474  435600 kic_runner.go:114] Args: [docker exec --privileged addons-778133 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 17:52:52.415476  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:52.431680  435600 machine.go:93] provisionDockerMachine start ...
	I0819 17:52:52.431774  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.448071  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.448429  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.448448  435600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:52:52.579462  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-778133
	
	I0819 17:52:52.579487  435600 ubuntu.go:169] provisioning hostname "addons-778133"
	I0819 17:52:52.579552  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.594983  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.595227  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.595239  435600 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-778133 && echo "addons-778133" | sudo tee /etc/hostname
	I0819 17:52:52.741356  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-778133
	
	I0819 17:52:52.741443  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.759487  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.759757  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.759782  435600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-778133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-778133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-778133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:52:52.892460  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:52:52.892488  435600 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19478-429440/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-429440/.minikube}
	I0819 17:52:52.892516  435600 ubuntu.go:177] setting up certificates
	I0819 17:52:52.892527  435600 provision.go:84] configureAuth start
	I0819 17:52:52.892615  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:52.909464  435600 provision.go:143] copyHostCerts
	I0819 17:52:52.909566  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/ca.pem (1082 bytes)
	I0819 17:52:52.909719  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/cert.pem (1123 bytes)
	I0819 17:52:52.909828  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/key.pem (1679 bytes)
	I0819 17:52:52.909911  435600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem org=jenkins.addons-778133 san=[127.0.0.1 192.168.49.2 addons-778133 localhost minikube]
	I0819 17:52:53.125810  435600 provision.go:177] copyRemoteCerts
	I0819 17:52:53.125893  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:52:53.125965  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.143545  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.237582  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:52:53.261074  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 17:52:53.284096  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:52:53.306941  435600 provision.go:87] duration metric: took 414.397023ms to configureAuth
	I0819 17:52:53.306968  435600 ubuntu.go:193] setting minikube options for container-runtime
	I0819 17:52:53.307167  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:52:53.307291  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.323680  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:53.323918  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:53.323939  435600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:52:53.568532  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:52:53.568620  435600 machine.go:96] duration metric: took 1.136918403s to provisionDockerMachine
	I0819 17:52:53.568645  435600 client.go:171] duration metric: took 9.621923787s to LocalClient.Create
	I0819 17:52:53.568696  435600 start.go:167] duration metric: took 9.622011472s to libmachine.API.Create "addons-778133"
	I0819 17:52:53.568728  435600 start.go:293] postStartSetup for "addons-778133" (driver="docker")
	I0819 17:52:53.568754  435600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:52:53.568842  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:52:53.568901  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.590467  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.687033  435600 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:52:53.690280  435600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 17:52:53.690317  435600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 17:52:53.690329  435600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 17:52:53.690335  435600 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 17:52:53.690345  435600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-429440/.minikube/addons for local assets ...
	I0819 17:52:53.690407  435600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-429440/.minikube/files for local assets ...
	I0819 17:52:53.690434  435600 start.go:296] duration metric: took 121.685833ms for postStartSetup
	I0819 17:52:53.690744  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:53.707347  435600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json ...
	I0819 17:52:53.707634  435600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:52:53.707685  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.723681  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.812652  435600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 17:52:53.816656  435600 start.go:128] duration metric: took 9.871612399s to createHost
	I0819 17:52:53.816685  435600 start.go:83] releasing machines lock for "addons-778133", held for 9.871772789s
	I0819 17:52:53.816752  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:53.835069  435600 ssh_runner.go:195] Run: cat /version.json
	I0819 17:52:53.835149  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.835455  435600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:52:53.835534  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.865173  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.873940  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:54.090887  435600 ssh_runner.go:195] Run: systemctl --version
	I0819 17:52:54.095407  435600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:52:54.240786  435600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:52:54.244829  435600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:52:54.268526  435600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 17:52:54.268671  435600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:52:54.301504  435600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
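The two `find … -exec mv` commands above disable CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A minimal sketch of that rename pattern, run against a throwaway directory instead of the real `/etc/cni/net.d` (the file names here are illustrative):

```shell
# Disable-by-rename pattern from the log above, exercised on a scratch dir.
tmpdir="$(mktemp -d)"
touch "$tmpdir/200-loopback.conf" "$tmpdir/87-podman-bridge.conflist"

# Rename every matching config that is not already disabled.
find "$tmpdir" -maxdepth 1 -type f \
  \( -name '*loopback.conf*' -or -name '*bridge*' -or -name '*podman*' \) \
  -not -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$tmpdir"
```

Re-running the command is safe: the `-not -name '*.mk_disabled'` clause skips files that were already renamed.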
	I0819 17:52:54.301527  435600 start.go:495] detecting cgroup driver to use...
	I0819 17:52:54.301579  435600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 17:52:54.301646  435600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:52:54.317954  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:52:54.328974  435600 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:52:54.329036  435600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:52:54.343038  435600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:52:54.356914  435600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:52:54.437144  435600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:52:54.536059  435600 docker.go:233] disabling docker service ...
	I0819 17:52:54.536170  435600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:52:54.561009  435600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:52:54.572789  435600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:52:54.655975  435600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:52:54.740906  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:52:54.752669  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:52:54.768753  435600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:52:54.768844  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.778390  435600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:52:54.778455  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.788836  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.798811  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.808674  435600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:52:54.817958  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.828129  435600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.843819  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.853214  435600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:52:54.861864  435600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:52:54.869843  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:52:54.951070  435600 ssh_runner.go:195] Run: sudo systemctl restart crio
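The sequence of `sed` edits above rewrites `02-crio.conf` in place: set the pause image, switch the cgroup manager to `cgroupfs`, pin `conmon_cgroup` to `"pod"`, and open unprivileged port binding via `default_sysctls`. A sketch of those edits against a scratch copy of the file (the two-line starting config is a stand-in, not the real cri-o config; assumes GNU sed):

```shell
# Reproduce the cri-o sed edits from the log on a minimal stand-in config.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Insert conmon_cgroup = "pod" directly after the cgroup_manager line.
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Create an empty default_sysctls block if none exists, then prepend the
# sysctl that allows binding ports below 1024 without privileges.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The `ip_unprivileged_port_start=0` sysctl is what lets the ingress-nginx controller pod bind ports 80/443 directly under cri-o.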
	I0819 17:52:55.078241  435600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:52:55.078367  435600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:52:55.082213  435600 start.go:563] Will wait 60s for crictl version
	I0819 17:52:55.082306  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:52:55.085961  435600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:52:55.132058  435600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 17:52:55.132216  435600 ssh_runner.go:195] Run: crio --version
	I0819 17:52:55.172806  435600 ssh_runner.go:195] Run: crio --version
	I0819 17:52:55.215621  435600 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 17:52:55.218302  435600 cli_runner.go:164] Run: docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:52:55.234250  435600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 17:52:55.237680  435600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
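The `{ grep -v …; echo …; } > /tmp/h.$$` pipeline above is an idempotent replace-or-append for a hosts entry: strip any existing line for the name, append the fresh mapping, then copy the result back. A sketch of the same pattern against a temp file (the `update_entry` helper name is ours, not minikube's):

```shell
# Idempotent host-entry update, as in the /etc/hosts command above,
# exercised against a temp file instead of the real /etc/hosts.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

update_entry() {
  # Drop any existing line for the name, then append the fresh mapping.
  { grep -v $'\thost.minikube.internal$' "$hosts"; \
    printf '%s\thost.minikube.internal\n' "$1"; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}

update_entry 192.168.49.1   # same IP: still exactly one entry
update_entry 10.0.0.5       # changed IP: old line replaced
cat "$hosts"
```

Writing to a temporary file and copying back avoids truncating `/etc/hosts` mid-pipeline, since the shell would otherwise open the redirect target before `grep` reads it.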
	I0819 17:52:55.248391  435600 kubeadm.go:883] updating cluster {Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0819 17:52:55.248514  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:55.248577  435600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:52:55.327711  435600 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:52:55.327732  435600 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:52:55.327785  435600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:52:55.364932  435600 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:52:55.364957  435600 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:52:55.364964  435600 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 17:52:55.365069  435600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-778133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:52:55.365153  435600 ssh_runner.go:195] Run: crio config
	I0819 17:52:55.414872  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:52:55.414897  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:55.414909  435600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:52:55.414932  435600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-778133 NodeName:addons-778133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:52:55.415089  435600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-778133"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:52:55.415166  435600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:52:55.424196  435600 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:52:55.424282  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:52:55.432772  435600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 17:52:55.450351  435600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:52:55.468898  435600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 17:52:55.487124  435600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 17:52:55.490460  435600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:52:55.501118  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:52:55.587630  435600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:52:55.601693  435600 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133 for IP: 192.168.49.2
	I0819 17:52:55.601758  435600 certs.go:194] generating shared ca certs ...
	I0819 17:52:55.601790  435600 certs.go:226] acquiring lock for ca certs: {Name:mkc364a164a604cbf63463c0c33b0382c8bd91c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.602450  435600 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key
	I0819 17:52:55.920701  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt ...
	I0819 17:52:55.920733  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt: {Name:mk84e5bd91ccf3d6043b6e27954388f94bb2461d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.921341  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key ...
	I0819 17:52:55.921357  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key: {Name:mk8ba86f9bae0e688a1c6b9e22d920a748851a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.922486  435600 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key
	I0819 17:52:56.535188  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt ...
	I0819 17:52:56.535224  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt: {Name:mk6f1fc86ce7bfdf7d31502c33352c2d264f4667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:56.535401  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key ...
	I0819 17:52:56.535416  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key: {Name:mk377951487110cebed7bc7f6844bc68050b2a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:56.535498  435600 certs.go:256] generating profile certs ...
	I0819 17:52:56.535562  435600 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key
	I0819 17:52:56.535583  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt with IP's: []
	I0819 17:52:57.225251  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt ...
	I0819 17:52:57.225284  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: {Name:mk580b58cae13ef6ef9e12b7bd4f045cb2386b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.225482  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key ...
	I0819 17:52:57.225495  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key: {Name:mk9aa22a4b7d94e3184414f40370806d2554e00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.225586  435600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4
	I0819 17:52:57.225606  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 17:52:57.778373  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 ...
	I0819 17:52:57.778405  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4: {Name:mka10be691659db94ac0ae80c1c9fc1ba377b153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.778589  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4 ...
	I0819 17:52:57.778605  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4: {Name:mke58d611cea1f5604124e227ad5c804259fa988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.778695  435600 certs.go:381] copying /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 -> /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt
	I0819 17:52:57.778775  435600 certs.go:385] copying /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4 -> /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key
	I0819 17:52:57.778828  435600 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key
	I0819 17:52:57.778848  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt with IP's: []
	I0819 17:52:58.138260  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt ...
	I0819 17:52:58.138291  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt: {Name:mk3e89ea844ce45b8320e564497fe77665ea72c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:58.138891  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key ...
	I0819 17:52:58.138907  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key: {Name:mk309af0ce5fdf09daae71bd5a79b07fa68cad18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:58.139449  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 17:52:58.139500  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem (1082 bytes)
	I0819 17:52:58.139530  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:52:58.139557  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem (1679 bytes)
	I0819 17:52:58.140167  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:52:58.165186  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:52:58.190147  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:52:58.214350  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 17:52:58.238560  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:52:58.262270  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 17:52:58.286474  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:52:58.311178  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:52:58.335181  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:52:58.360433  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:52:58.378594  435600 ssh_runner.go:195] Run: openssl version
	I0819 17:52:58.384328  435600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:52:58.394731  435600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.398362  435600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.398456  435600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.405265  435600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
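The `openssl x509 -hash` / `ln -fs … b5213941.0` pair above implements the standard OpenSSL CA-directory layout: libraries look up a CA in `/etc/ssl/certs` via a symlink named `<subject-hash>.0`. A sketch of that step using a throwaway self-signed cert in a scratch directory (the `example-ca` subject is ours; assumes the `openssl` CLI is available):

```shell
# Hash-symlink step from the log: OpenSSL finds CAs by <subject-hash>.0 links.
certdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" 2>/dev/null

# Compute the subject-name hash and create the lookup symlink.
hash="$(openssl x509 -hash -noout -in "$certdir/ca.pem")"
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
ls -l "$certdir"
```

The fixed `b5213941.0` name in the log is simply the subject hash of minikube's generated `minikubeCA` certificate; the `test -L || ln -fs` guard makes the step idempotent across restarts.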
	I0819 17:52:58.414661  435600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:52:58.418014  435600 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:52:58.418066  435600 kubeadm.go:392] StartCluster: {Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:58.418148  435600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:52:58.418203  435600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:52:58.454343  435600 cri.go:89] found id: ""
	I0819 17:52:58.454413  435600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:52:58.463413  435600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:52:58.472397  435600 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 17:52:58.472516  435600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:52:58.483693  435600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:52:58.483716  435600 kubeadm.go:157] found existing configuration files:
	
	I0819 17:52:58.483779  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:52:58.492569  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:52:58.492636  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:52:58.501194  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:52:58.510115  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:52:58.510182  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:52:58.518906  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:52:58.527586  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:52:58.527670  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:52:58.536357  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:52:58.545236  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:52:58.545304  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:52:58.553990  435600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 17:52:58.590799  435600 kubeadm.go:310] W0819 17:52:58.590063    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:52:58.592095  435600 kubeadm.go:310] W0819 17:52:58.591472    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:52:58.612285  435600 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 17:52:58.684427  435600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:53:14.449959  435600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:53:14.450042  435600 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:53:14.450152  435600 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 17:53:14.450224  435600 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 17:53:14.450262  435600 kubeadm.go:310] OS: Linux
	I0819 17:53:14.450309  435600 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 17:53:14.450359  435600 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 17:53:14.450408  435600 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 17:53:14.450458  435600 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 17:53:14.450508  435600 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 17:53:14.450557  435600 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 17:53:14.450604  435600 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 17:53:14.450654  435600 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 17:53:14.450701  435600 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 17:53:14.450772  435600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:53:14.450866  435600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:53:14.450955  435600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:53:14.451017  435600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:53:14.453791  435600 out.go:235]   - Generating certificates and keys ...
	I0819 17:53:14.453880  435600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:53:14.453946  435600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:53:14.454026  435600 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:53:14.454086  435600 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:53:14.454150  435600 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:53:14.454201  435600 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:53:14.454256  435600 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:53:14.454376  435600 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-778133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:53:14.454431  435600 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:53:14.454544  435600 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-778133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:53:14.454613  435600 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:53:14.454681  435600 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:53:14.454727  435600 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:53:14.454784  435600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:53:14.454836  435600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:53:14.454893  435600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:53:14.454951  435600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:53:14.455016  435600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:53:14.455072  435600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:53:14.455157  435600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:53:14.455224  435600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:53:14.457629  435600 out.go:235]   - Booting up control plane ...
	I0819 17:53:14.457738  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:53:14.457832  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:53:14.457915  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:53:14.458034  435600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:53:14.458120  435600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:53:14.458161  435600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:53:14.458295  435600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:53:14.458398  435600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:53:14.458459  435600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001858221s
	I0819 17:53:14.458567  435600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:53:14.458643  435600 kubeadm.go:310] [api-check] The API server is healthy after 6.501929959s
	I0819 17:53:14.458766  435600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:53:14.458903  435600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:53:14.458972  435600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:53:14.459178  435600 kubeadm.go:310] [mark-control-plane] Marking the node addons-778133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:53:14.459245  435600 kubeadm.go:310] [bootstrap-token] Using token: a0y4tw.zf7e6vdo3kh8x28x
	I0819 17:53:14.461932  435600 out.go:235]   - Configuring RBAC rules ...
	I0819 17:53:14.462057  435600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:53:14.462164  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:53:14.462313  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:53:14.462483  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:53:14.462622  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:53:14.462723  435600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:53:14.462866  435600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:53:14.462916  435600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:53:14.462967  435600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:53:14.462976  435600 kubeadm.go:310] 
	I0819 17:53:14.463034  435600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:53:14.463041  435600 kubeadm.go:310] 
	I0819 17:53:14.463115  435600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:53:14.463123  435600 kubeadm.go:310] 
	I0819 17:53:14.463147  435600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:53:14.463207  435600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:53:14.463261  435600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:53:14.463269  435600 kubeadm.go:310] 
	I0819 17:53:14.463321  435600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:53:14.463327  435600 kubeadm.go:310] 
	I0819 17:53:14.463373  435600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:53:14.463381  435600 kubeadm.go:310] 
	I0819 17:53:14.463433  435600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:53:14.463508  435600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:53:14.463579  435600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:53:14.463587  435600 kubeadm.go:310] 
	I0819 17:53:14.463669  435600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:53:14.463745  435600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:53:14.463753  435600 kubeadm.go:310] 
	I0819 17:53:14.463834  435600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a0y4tw.zf7e6vdo3kh8x28x \
	I0819 17:53:14.463936  435600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e18b21b1696fc0b5c17033532881e73bdede18d2af0b9932aa5de205ca4b73 \
	I0819 17:53:14.463959  435600 kubeadm.go:310] 	--control-plane 
	I0819 17:53:14.463963  435600 kubeadm.go:310] 
	I0819 17:53:14.464046  435600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:53:14.464055  435600 kubeadm.go:310] 
	I0819 17:53:14.464146  435600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a0y4tw.zf7e6vdo3kh8x28x \
	I0819 17:53:14.464333  435600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e18b21b1696fc0b5c17033532881e73bdede18d2af0b9932aa5de205ca4b73 
	I0819 17:53:14.464347  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:53:14.464356  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:53:14.467013  435600 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:53:14.469705  435600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:53:14.473910  435600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:53:14.473930  435600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:53:14.492843  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:53:14.783597  435600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:53:14.783733  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:14.783827  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-778133 minikube.k8s.io/updated_at=2024_08_19T17_53_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=addons-778133 minikube.k8s.io/primary=true
	I0819 17:53:14.940380  435600 ops.go:34] apiserver oom_adj: -16
	I0819 17:53:14.940476  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:15.440843  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:15.941308  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:16.441481  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:16.940605  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:17.441105  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:17.940578  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:18.440999  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:18.940591  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:19.043283  435600 kubeadm.go:1113] duration metric: took 4.259596356s to wait for elevateKubeSystemPrivileges
	I0819 17:53:19.043337  435600 kubeadm.go:394] duration metric: took 20.62527546s to StartCluster
	I0819 17:53:19.043374  435600 settings.go:142] acquiring lock: {Name:mk90a62cf51d9178249af9ac62d14840346a8775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:53:19.043551  435600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:53:19.043988  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/kubeconfig: {Name:mkf3f1794a92fe24d6cafa4b1b651286dbd5b9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:53:19.044269  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:53:19.044466  435600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:53:19.044638  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:53:19.044686  435600 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:53:19.044771  435600 addons.go:69] Setting yakd=true in profile "addons-778133"
	I0819 17:53:19.044796  435600 addons.go:234] Setting addon yakd=true in "addons-778133"
	I0819 17:53:19.044822  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.045292  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.045601  435600 addons.go:69] Setting inspektor-gadget=true in profile "addons-778133"
	I0819 17:53:19.045628  435600 addons.go:234] Setting addon inspektor-gadget=true in "addons-778133"
	I0819 17:53:19.045656  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.046067  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.046398  435600 addons.go:69] Setting metrics-server=true in profile "addons-778133"
	I0819 17:53:19.046432  435600 addons.go:234] Setting addon metrics-server=true in "addons-778133"
	I0819 17:53:19.046462  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.046924  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.047055  435600 addons.go:69] Setting cloud-spanner=true in profile "addons-778133"
	I0819 17:53:19.047080  435600 addons.go:234] Setting addon cloud-spanner=true in "addons-778133"
	I0819 17:53:19.047107  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.047483  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.047955  435600 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-778133"
	I0819 17:53:19.048027  435600 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-778133"
	I0819 17:53:19.048055  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.048501  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.051145  435600 addons.go:69] Setting default-storageclass=true in profile "addons-778133"
	I0819 17:53:19.051201  435600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-778133"
	I0819 17:53:19.051569  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.061323  435600 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-778133"
	I0819 17:53:19.061371  435600 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-778133"
	I0819 17:53:19.061406  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.061839  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.073715  435600 addons.go:69] Setting gcp-auth=true in profile "addons-778133"
	I0819 17:53:19.073813  435600 mustload.go:65] Loading cluster: addons-778133
	I0819 17:53:19.074050  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:53:19.074431  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.076294  435600 addons.go:69] Setting registry=true in profile "addons-778133"
	I0819 17:53:19.076345  435600 addons.go:234] Setting addon registry=true in "addons-778133"
	I0819 17:53:19.076384  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.076869  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.081321  435600 addons.go:69] Setting storage-provisioner=true in profile "addons-778133"
	I0819 17:53:19.081358  435600 addons.go:234] Setting addon storage-provisioner=true in "addons-778133"
	I0819 17:53:19.081401  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.081822  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.083372  435600 addons.go:69] Setting ingress=true in profile "addons-778133"
	I0819 17:53:19.083412  435600 addons.go:234] Setting addon ingress=true in "addons-778133"
	I0819 17:53:19.083457  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.083896  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.088880  435600 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-778133"
	I0819 17:53:19.088929  435600 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-778133"
	I0819 17:53:19.089294  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100316  435600 addons.go:69] Setting ingress-dns=true in profile "addons-778133"
	I0819 17:53:19.100366  435600 addons.go:234] Setting addon ingress-dns=true in "addons-778133"
	I0819 17:53:19.100409  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.100873  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100316  435600 addons.go:69] Setting volcano=true in profile "addons-778133"
	I0819 17:53:19.116501  435600 addons.go:234] Setting addon volcano=true in "addons-778133"
	I0819 17:53:19.116572  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.117090  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100330  435600 addons.go:69] Setting volumesnapshots=true in profile "addons-778133"
	I0819 17:53:19.117664  435600 addons.go:234] Setting addon volumesnapshots=true in "addons-778133"
	I0819 17:53:19.117694  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.118084  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.131019  435600 out.go:177] * Verifying Kubernetes components...
	I0819 17:53:19.133985  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:53:19.142243  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:53:19.148942  435600 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:53:19.151852  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:53:19.151917  435600 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:53:19.152025  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.173802  435600 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:53:19.182043  435600 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:53:19.182112  435600 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:53:19.182216  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.197386  435600 addons.go:234] Setting addon default-storageclass=true in "addons-778133"
	I0819 17:53:19.197441  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.197882  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.198957  435600 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:53:19.201610  435600 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:53:19.201658  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:53:19.201753  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.208496  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:53:19.211427  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:53:19.214150  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:53:19.216877  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:53:19.219468  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:53:19.219568  435600 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:53:19.220975  435600 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:53:19.230252  435600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:53:19.248325  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.250787  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:53:19.250812  435600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:53:19.251027  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.252131  435600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:53:19.252204  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:53:19.252737  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.267758  435600 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:53:19.253384  435600 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:53:19.253496  435600 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:53:19.268568  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:53:19.268749  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.297072  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:53:19.299632  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:53:19.302150  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:53:19.302174  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:53:19.302248  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.316394  435600 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:53:19.316414  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:53:19.316475  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	W0819 17:53:19.330995  435600 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:53:19.333789  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.337575  435600 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-778133"
	I0819 17:53:19.337612  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.338042  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.366042  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:53:19.368569  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:53:19.368592  435600 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:53:19.368669  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.376012  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.380385  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:53:19.380869  435600 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:53:19.391356  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:19.391664  435600 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:53:19.391679  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:53:19.391746  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.407751  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:53:19.424343  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.424387  435600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:53:19.424401  435600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:53:19.424453  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.425312  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.425956  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.426306  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.427538  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:19.431214  435600 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:53:19.431236  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:53:19.431298  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.458566  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.495521  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.523666  435600 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:53:19.524827  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.536882  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.544758  435600 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:53:19.548569  435600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:53:19.548591  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:53:19.548653  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.550001  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.553886  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.597736  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.741088  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:53:19.741158  435600 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:53:19.917131  435600 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:53:19.917212  435600 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:53:19.987203  435600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:53:20.000404  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:53:20.000468  435600 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:53:20.003184  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:53:20.003246  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:53:20.012865  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:53:20.012957  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:53:20.082254  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:53:20.086304  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:53:20.086331  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:53:20.094614  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:53:20.101908  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:53:20.117266  435600 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:53:20.117293  435600 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:53:20.120030  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:53:20.120928  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:53:20.126924  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:53:20.129771  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:53:20.153471  435600 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:53:20.153557  435600 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:53:20.158099  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:53:20.158168  435600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:53:20.198017  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:53:20.198093  435600 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:53:20.202933  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:53:20.203001  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:53:20.244340  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:53:20.244414  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:53:20.285794  435600 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:53:20.285869  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:53:20.331354  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:53:20.331428  435600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:53:20.334826  435600 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:53:20.334903  435600 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:53:20.410747  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:53:20.410817  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:53:20.429640  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:53:20.429717  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:53:20.436985  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:53:20.437066  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:53:20.482877  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:53:20.490864  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:53:20.573219  435600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:53:20.573293  435600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:53:20.578867  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:53:20.578950  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:53:20.626119  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:53:20.626192  435600 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:53:20.658526  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:53:20.727978  435600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:53:20.728056  435600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:53:20.787119  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:53:20.787195  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:53:20.853899  435600 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:20.853967  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:53:20.870139  435600 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:53:20.870221  435600 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:53:20.942636  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:53:20.942702  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:53:20.991066  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:21.040983  435600 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:53:21.041055  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:53:21.107823  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:53:21.107901  435600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:53:21.226931  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:53:21.286903  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:53:21.286962  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:53:21.498862  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:53:21.498935  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:53:21.521379  435600 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.534100507s)
	I0819 17:53:21.521623  435600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.113846223s)
	I0819 17:53:21.521660  435600 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 17:53:21.523121  435600 node_ready.go:35] waiting up to 6m0s for node "addons-778133" to be "Ready" ...
	I0819 17:53:21.705364  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:53:21.705445  435600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:53:21.886914  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:53:22.863456  435600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-778133" context rescaled to 1 replicas
	I0819 17:53:23.722289  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:24.865105  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.782814258s)
	I0819 17:53:24.865189  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.770549818s)
	I0819 17:53:24.865391  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.763459728s)
	I0819 17:53:25.087167  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.967097096s)
	I0819 17:53:25.087370  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.966420614s)
	I0819 17:53:26.018123  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.891119426s)
	I0819 17:53:26.018159  435600 addons.go:475] Verifying addon ingress=true in "addons-778133"
	I0819 17:53:26.018330  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.888495641s)
	I0819 17:53:26.018374  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.535426159s)
	I0819 17:53:26.018449  435600 addons.go:475] Verifying addon registry=true in "addons-778133"
	I0819 17:53:26.018651  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.527715791s)
	I0819 17:53:26.018674  435600 addons.go:475] Verifying addon metrics-server=true in "addons-778133"
	I0819 17:53:26.018712  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.360117926s)
	I0819 17:53:26.018941  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.027794646s)
	W0819 17:53:26.019196  435600 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:53:26.019221  435600 retry.go:31] will retry after 359.450556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
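The `ensure CRDs are installed first` error above is an apply-ordering race: the `VolumeSnapshotClass` object is submitted in the same `kubectl apply` batch as the CRDs that define its kind, so the API server rejects it before the new types are established. minikube recovers on its own by retrying the apply. A manual workaround, sketched here under the assumption that the same addon manifests are on disk at the paths shown in the log, is to apply the CRDs alone, wait for them to be established, and only then apply the dependent objects:

```shell
# Step 1: apply only the snapshot CRDs (paths as reported in the log above).
kubectl apply \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

# Step 2: block until the API server reports the Established condition,
# i.e. the new kinds are actually servable.
kubectl wait --for=condition=Established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
  crd/volumesnapshots.snapshot.storage.k8s.io

# Step 3: apply the objects that depend on those kinds.
kubectl apply \
  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
```

This mirrors what the retry at 17:53:26 accomplishes implicitly (the CRDs from the first failed batch were already created, so the second apply succeeds); the explicit `kubectl wait` just removes the race rather than racing again.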
	I0819 17:53:26.019008  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.792007386s)
	I0819 17:53:26.021187  435600 out.go:177] * Verifying registry addon...
	I0819 17:53:26.021187  435600 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-778133 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:53:26.021302  435600 out.go:177] * Verifying ingress addon...
	I0819 17:53:26.025543  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:53:26.025559  435600 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:53:26.048494  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:26.068514  435600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:53:26.068592  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:26.076420  435600 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:53:26.076493  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:26.375689  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.488731363s)
	I0819 17:53:26.375768  435600 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-778133"
	I0819 17:53:26.378665  435600 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:53:26.378872  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:26.382308  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:53:26.395827  435600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:53:26.395895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:26.549746  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:26.550984  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:26.872488  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:53:26.872643  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:26.887323  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:26.898899  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:27.043584  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:27.045284  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:27.159918  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:53:27.241385  435600 addons.go:234] Setting addon gcp-auth=true in "addons-778133"
	I0819 17:53:27.241486  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:27.242039  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:27.277697  435600 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:53:27.277750  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:27.308407  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:27.401676  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:27.535762  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:27.537305  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:27.626609  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.247696637s)
	I0819 17:53:27.629562  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:27.631923  435600 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:53:27.634442  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:53:27.634498  435600 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:53:27.660609  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:53:27.660686  435600 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:53:27.681354  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:53:27.681425  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:53:27.701091  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:53:27.886640  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.029170  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:28.036708  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:28.379604  435600 addons.go:475] Verifying addon gcp-auth=true in "addons-778133"
	I0819 17:53:28.382480  435600 out.go:177] * Verifying gcp-auth addon...
	I0819 17:53:28.387338  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:53:28.404125  435600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:53:28.404194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:28.404976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.528725  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:28.533955  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:28.535160  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:28.885969  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.890302  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:29.033784  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:29.034984  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:29.387192  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:29.397238  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:29.530735  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:29.531865  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:29.887299  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:29.891503  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:30.033548  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:30.034759  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:30.386439  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:30.390594  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:30.534667  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:30.535830  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:30.536289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:30.886445  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:30.890362  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:31.033986  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:31.034239  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:31.385642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:31.390600  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:31.529314  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:31.530207  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:31.886040  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:31.890434  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:32.031160  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:32.031957  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:32.386522  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:32.392165  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:32.533939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:32.535303  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:32.886512  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:32.891194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:33.027901  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:33.029809  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:33.030732  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:33.386855  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:33.391071  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:33.530582  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:33.531502  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:33.886457  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:33.891075  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:34.029663  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:34.030390  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:34.386158  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:34.391076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:34.529770  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:34.532608  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:34.886304  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:34.890645  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:35.031019  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:35.031315  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:35.386068  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:35.390642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:35.526438  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:35.528496  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:35.529518  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:35.886456  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:35.890310  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:36.030620  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:36.032243  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:36.385607  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:36.390705  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:36.529311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:36.529774  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:36.885877  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:36.890027  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:37.031954  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:37.033327  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:37.386054  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:37.390196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:37.526482  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:37.529072  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:37.530071  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:37.885608  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:37.890502  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:38.030196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:38.030708  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:38.386095  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:38.390204  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:38.528405  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:38.529466  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:38.886319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:38.890408  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:39.029467  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:39.031385  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:39.386257  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:39.390595  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:39.528145  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:39.529900  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:39.530494  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:39.886652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:39.891224  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:40.030132  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:40.030786  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:40.385805  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:40.393076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:40.530964  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:40.531251  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:40.885860  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:40.891089  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:41.029423  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:41.030289  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:41.385846  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:41.391134  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:41.528396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:41.529898  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:41.885677  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:41.890399  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:42.034559  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:42.034631  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:42.035825  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:42.385988  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:42.391017  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:42.531030  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:42.531516  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:42.886444  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:42.890243  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:43.029569  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:43.030418  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:43.385872  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:43.390292  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:43.529122  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:43.530901  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:43.886493  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:43.890768  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:44.028424  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:44.029795  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:44.385710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:44.391090  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:44.527491  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:44.530660  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:44.531671  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:44.886826  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:44.890612  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:45.032767  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:45.033689  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:45.386397  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:45.390272  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:45.529185  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:45.530347  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:45.885972  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:45.890261  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:46.031447  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:46.033083  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:46.385663  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:46.390983  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:46.528431  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:46.530192  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:46.530922  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:46.886593  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:46.891271  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:47.030394  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:47.031485  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:47.386716  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:47.393693  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:47.529837  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:47.530613  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:47.886078  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:47.890256  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:48.030396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:48.031369  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:48.385976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:48.390219  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:48.529242  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:48.530601  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:48.531879  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:48.886681  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:48.891028  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:49.032007  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:49.032539  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:49.386500  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:49.391291  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:49.530958  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:49.531191  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:49.888407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:49.891996  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:50.030947  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:50.032419  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:50.386349  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:50.390730  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:50.530094  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:50.530828  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:50.886948  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:50.890952  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:51.029298  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:51.031221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:51.032093  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:51.385750  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:51.391137  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:51.530449  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:51.532119  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:51.885652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:51.891530  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:52.029770  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:52.030277  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:52.385806  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:52.390928  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:52.529805  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:52.530177  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:52.887071  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:52.890976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:53.030682  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:53.031713  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:53.386869  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:53.390255  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:53.526755  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:53.529868  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:53.530950  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:53.886650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:53.890636  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:54.030233  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:54.031409  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:54.386515  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:54.391748  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:54.528639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:54.529866  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:54.887626  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:54.890543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:55.030333  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:55.031616  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:55.386399  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:55.390189  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:55.527654  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:55.531378  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:55.532699  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:55.886402  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:55.890549  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:56.030521  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:56.030736  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:56.386430  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:56.390731  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:56.530780  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:56.531669  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:56.886389  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:56.890066  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:57.031150  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:57.031312  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:57.386015  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:57.390902  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:57.530529  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:57.531334  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:57.886352  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:57.889982  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:58.027266  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:58.030878  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:58.032479  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:58.386235  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:58.391013  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:58.529311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:58.531055  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:58.886274  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:58.891136  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:59.028084  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:59.029128  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:59.385698  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:59.390446  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:59.529306  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:59.529612  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:59.886150  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:59.891111  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:00.028706  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:00.045985  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:00.052584  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:00.386786  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:00.391305  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:00.533179  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:00.533901  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:00.886363  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:00.890439  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:01.031040  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:01.032033  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:01.386650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:01.390985  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:01.530049  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:01.531177  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:01.886102  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:01.892102  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:02.030254  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:02.031111  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:02.385933  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:02.390925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:02.527196  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:02.529284  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:02.531808  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:02.887194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:02.891130  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:03.029548  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:03.030561  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:03.386855  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:03.391379  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:03.530289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:03.531087  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:03.885822  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:03.891797  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:04.030076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:04.031154  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:04.385888  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:04.391155  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:04.529652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:04.530576  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:04.886221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:04.891181  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:05.029884  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:05.032419  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:05.032786  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:05.388844  435600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:54:05.388870  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:05.392895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:05.556940  435600 node_ready.go:49] node "addons-778133" has status "Ready":"True"
	I0819 17:54:05.556966  435600 node_ready.go:38] duration metric: took 44.033649296s for node "addons-778133" to be "Ready" ...
	I0819 17:54:05.556982  435600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:54:05.576864  435600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:54:05.576891  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:05.577915  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:05.609884  435600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:05.890665  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:05.897733  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:06.040767  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:06.042671  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:06.389867  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:06.393316  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:06.531126  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:06.531680  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:06.887508  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:06.890619  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:07.032729  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:07.034572  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:07.117273  435600 pod_ready.go:93] pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.117297  435600 pod_ready.go:82] duration metric: took 1.507376465s for pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.117321  435600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.122145  435600 pod_ready.go:93] pod "etcd-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.122211  435600 pod_ready.go:82] duration metric: took 4.880868ms for pod "etcd-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.122240  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.131152  435600 pod_ready.go:93] pod "kube-apiserver-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.131182  435600 pod_ready.go:82] duration metric: took 8.918924ms for pod "kube-apiserver-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.131194  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.136343  435600 pod_ready.go:93] pod "kube-controller-manager-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.136368  435600 pod_ready.go:82] duration metric: took 5.165686ms for pod "kube-controller-manager-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.136383  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvz5" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.143997  435600 pod_ready.go:93] pod "kube-proxy-jzvz5" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.144024  435600 pod_ready.go:82] duration metric: took 7.633643ms for pod "kube-proxy-jzvz5" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.144036  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.387874  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:07.390522  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:07.528898  435600 pod_ready.go:93] pod "kube-scheduler-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.528927  435600 pod_ready.go:82] duration metric: took 384.88353ms for pod "kube-scheduler-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.528941  435600 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.531092  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:07.531777  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:07.887511  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:07.890489  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:08.029727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:08.032725  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:08.389133  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:08.392289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:08.534650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:08.536444  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:08.889005  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:08.892092  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:09.034886  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:09.036310  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:09.389370  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:09.392378  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:09.533334  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:09.534092  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:09.539413  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:09.889410  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:09.893962  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:10.032779  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:10.037461  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:10.386743  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:10.391349  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:10.542686  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:10.547951  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:10.886798  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:10.890179  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:11.033595  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:11.038430  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:11.387587  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:11.390331  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:11.531253  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:11.531846  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:11.889109  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:11.891587  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:12.030992  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:12.047831  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:12.063237  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:12.389223  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:12.394026  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:12.534371  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:12.537987  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:12.891333  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:12.893646  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:13.030726  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:13.031421  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:13.387181  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:13.391343  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:13.531124  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:13.532276  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:13.887074  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:13.890887  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:14.034618  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:14.035146  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:14.388095  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:14.391031  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:14.532110  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:14.533195  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:14.538007  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:14.887631  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:14.890443  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:15.036048  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:15.038327  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:15.389995  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:15.394108  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:15.532081  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:15.533211  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:15.887863  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:15.891204  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:16.031650  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:16.032615  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:16.389029  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:16.392243  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:16.541460  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:16.543800  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:16.888148  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:16.891722  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:17.037586  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:17.038925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:17.043491  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:17.402262  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:17.404319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:17.566823  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:17.570485  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:17.964035  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:17.964907  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:18.059538  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:18.065624  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:18.398861  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:18.400576  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:18.538169  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:18.539900  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:18.892322  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:18.898838  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:19.033037  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:19.034423  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:19.399958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:19.400626  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:19.537803  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:19.538685  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:19.541016  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:19.888552  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:19.893733  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:20.039483  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:20.039947  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:20.387868  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:20.390898  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:20.532045  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:20.532405  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:20.887113  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:20.890444  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:21.030337  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:21.031109  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:21.387927  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:21.390699  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:21.533903  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:21.535971  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:21.543839  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:21.887574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:21.890616  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:22.030833  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:22.033251  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:22.387408  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:22.390680  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:22.530188  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:22.531835  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:22.891287  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:22.894319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:23.031673  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:23.032596  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:23.387215  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:23.391012  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:23.530224  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:23.531692  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:23.887849  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:23.898080  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:24.032636  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:24.034418  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:24.038224  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:24.388195  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:24.392410  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:24.533325  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:24.534509  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:24.894703  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:24.896693  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:25.033221  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:25.034188  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:25.387885  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:25.391013  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:25.531310  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:25.531906  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:25.887771  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:25.890575  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:26.031739  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:26.032621  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:26.387727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:26.398036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:26.532121  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:26.533152  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:26.542468  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:26.887769  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:26.890700  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:27.042221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:27.043494  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:27.387876  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:27.399106  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:27.588431  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:27.588537  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:27.890106  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:27.893305  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:28.039311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:28.043923  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:28.389014  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:28.394724  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:28.557533  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:28.559056  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:28.590870  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:28.888085  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:28.890323  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:29.034913  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:29.036415  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:29.387026  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:29.390958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:29.534763  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:29.536789  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:29.889296  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:29.894385  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:30.030420  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:30.034926  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:30.387186  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:30.390871  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:30.535751  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:30.536758  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:30.888196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:30.891406  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:31.030036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:31.032558  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:31.042930  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:31.388035  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:31.390835  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:31.532172  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:31.533884  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:31.888308  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:31.891307  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:32.035089  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:32.035716  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:32.389139  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:32.394476  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:32.532585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:32.533707  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:32.888028  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:32.890628  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:33.030203  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:33.031231  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:33.388033  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:33.390371  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:33.530998  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:33.532009  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:33.536099  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:33.887690  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:33.890405  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:34.030999  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:34.032183  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:34.389073  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:34.393534  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:34.529853  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:34.531631  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:34.891133  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:34.893727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:35.030814  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:35.034364  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:35.392388  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:35.395725  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:35.538183  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:35.539132  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:35.545697  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:35.888209  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:35.891180  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:36.030923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:36.032934  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:36.387696  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:36.390412  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:36.530908  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:36.531574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:36.888079  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:36.891361  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:37.031826  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:37.032606  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:37.390119  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:37.395426  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:37.530962  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:37.532678  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:37.887428  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:37.890537  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:38.032566  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:38.033705  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:38.037891  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:38.388483  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:38.392400  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:38.536399  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:38.539164  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:38.893679  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:38.894460  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:39.034080  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:39.049556  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:39.388692  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:39.391816  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:39.530530  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:39.532186  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:39.887189  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:39.891429  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:40.032316  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:40.032807  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:40.388478  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:40.391296  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:40.533905  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:40.536536  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:40.538180  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:40.889987  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:40.890830  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:41.033748  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:41.035225  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:41.387688  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:41.391213  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:41.532350  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:41.533865  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:41.888048  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:41.893152  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:42.030543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:42.032108  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:42.388943  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:42.392464  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:42.530743  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:42.531440  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:42.887905  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:42.890515  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:43.029487  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:43.031617  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:43.036592  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:43.388134  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:43.391910  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:43.530657  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:43.531110  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:43.887365  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:43.891187  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:44.031670  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:44.032743  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:44.390958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:44.393015  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:44.530037  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:44.532205  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:44.889665  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:44.892781  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:45.038014  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:45.042049  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:45.044402  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:45.387957  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:45.390543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:45.530479  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:45.531275  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:45.903354  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:45.914574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:46.088289  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:46.089287  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:46.387639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:46.399011  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:46.531306  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:46.531706  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:46.888653  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:46.891407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:47.031390  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:47.031656  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:47.387538  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:47.390752  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:47.534401  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:47.539579  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:47.543769  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:47.888729  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:47.891895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:48.035780  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:48.037286  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:48.386934  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:48.390604  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:48.536717  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:48.539934  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:48.892047  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:48.893396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:49.030794  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:49.032161  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:49.395425  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:49.399690  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:49.533389  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:49.534301  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:49.887771  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:49.891400  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:50.033533  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:50.033857  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:50.039912  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:50.389634  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:50.393585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:50.542305  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:50.542889  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:50.888099  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:50.891292  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:51.032816  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:51.033527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:51.390638  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:51.396879  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:51.531682  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:51.532782  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:51.887157  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:51.891407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:52.029270  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:52.030754  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:52.386973  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:52.390844  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:52.533258  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:52.534970  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:52.538274  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:52.887588  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:52.893025  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:53.031856  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:53.035127  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:53.388260  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:53.391939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:53.531627  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:53.533981  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:53.889008  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:53.895288  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:54.034065  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:54.035910  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:54.387847  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:54.391332  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:54.532247  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:54.541467  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:54.553586  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:54.889078  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:54.890937  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:55.035824  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:55.042561  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:55.387724  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:55.391090  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:55.531326  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:55.531590  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:55.889738  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:55.892865  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:56.033842  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:56.035178  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:56.387720  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:56.390879  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:56.531271  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:56.531797  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:56.887923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:56.890468  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:57.029697  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:57.032343  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:57.036118  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:57.386784  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:57.392820  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:57.531227  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:57.532053  435600 kapi.go:107] duration metric: took 1m31.506512137s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:54:57.886836  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:57.890881  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:58.030533  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:58.389125  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:58.394836  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:58.531673  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:58.887543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:58.890869  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:59.030434  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:59.036286  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:59.387527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:59.390639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:59.538024  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:59.888328  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:59.895710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:00.050978  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:00.387498  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:00.394139  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:00.546408  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:00.894585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:00.895240  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.030666  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:01.039280  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:01.393960  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:01.397669  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.530166  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:01.911710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.919329  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.041970  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:02.388959  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.393291  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:02.531642  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:02.888583  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.891575  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:03.031055  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:03.388110  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:03.391010  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:03.532488  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:03.543327  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:03.888710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:03.892951  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:04.031445  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:04.393485  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:04.397677  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:04.530905  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:04.888987  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:04.893545  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:05.032065  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:05.388527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:05.391621  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:05.532019  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:05.887859  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:05.890939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:06.032539  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:06.041189  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:06.388540  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:06.392489  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:06.538783  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:06.887909  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:06.890434  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:07.033311  435600 kapi.go:107] duration metric: took 1m41.007746575s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:55:07.389571  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:07.394721  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:07.893385  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:07.895241  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:08.389149  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:08.393891  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:08.534733  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:08.887925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:08.890642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:09.387917  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:09.390615  435600 kapi.go:107] duration metric: took 1m41.003275572s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:55:09.392258  435600 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-778133 cluster.
	I0819 17:55:09.394092  435600 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:55:09.395478  435600 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:55:09.888456  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:10.387801  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:10.547386  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:10.887309  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:11.393162  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:11.888338  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:12.387572  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:12.887660  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:13.035936  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:13.387923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:13.887121  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:14.387911  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:14.887137  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:15.063120  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:15.387642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:15.888009  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:16.387357  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:16.887197  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:17.442036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:17.544589  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:17.888603  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:18.387107  435600 kapi.go:107] duration metric: took 1m52.004796552s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:55:18.388520  435600 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 17:55:18.389855  435600 addons.go:510] duration metric: took 1m59.345150953s for enable addons: enabled=[storage-provisioner cloud-spanner default-storageclass nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 17:55:20.035985  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:22.036145  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:24.535033  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:26.035668  435600 pod_ready.go:93] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"True"
	I0819 17:55:26.035695  435600 pod_ready.go:82] duration metric: took 1m18.506745381s for pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.035708  435600 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.041475  435600 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace has status "Ready":"True"
	I0819 17:55:26.041501  435600 pod_ready.go:82] duration metric: took 5.784742ms for pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.041524  435600 pod_ready.go:39] duration metric: took 1m20.48450572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:55:26.041541  435600 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:55:26.041576  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:26.041643  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:26.094173  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:26.094201  435600 cri.go:89] found id: ""
	I0819 17:55:26.094210  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:26.094266  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.097811  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:26.097914  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:26.145659  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:26.145737  435600 cri.go:89] found id: ""
	I0819 17:55:26.145751  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:26.145818  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.149430  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:26.149507  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:26.193399  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:26.193475  435600 cri.go:89] found id: ""
	I0819 17:55:26.193497  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:26.193588  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.198000  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:26.198082  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:26.240912  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:26.240936  435600 cri.go:89] found id: ""
	I0819 17:55:26.240945  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:26.241027  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.244567  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:26.244642  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:26.285113  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:26.285134  435600 cri.go:89] found id: ""
	I0819 17:55:26.285141  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:26.285221  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.288980  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:26.289110  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:26.329797  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:26.329820  435600 cri.go:89] found id: ""
	I0819 17:55:26.329827  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:26.329884  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.333464  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:26.333546  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:26.375207  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:26.375231  435600 cri.go:89] found id: ""
	I0819 17:55:26.375240  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:26.375294  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.379041  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:26.379064  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:26.443338  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:26.443790  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:26.462360  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:26.462397  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:26.659763  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:26.659794  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:26.724577  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:26.724614  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:26.770312  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:26.770344  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:26.820060  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:26.820091  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:26.858028  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:26.858055  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:26.941542  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:26.941588  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:26.999922  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:26.999968  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:27.091991  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:27.092029  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:27.166929  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:27.166960  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:29.765953  435600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:55:29.780182  435600 api_server.go:72] duration metric: took 2m10.735654266s to wait for apiserver process to appear ...
	I0819 17:55:29.780209  435600 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:55:29.780274  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:29.780333  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:29.818703  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:29.818721  435600 cri.go:89] found id: ""
	I0819 17:55:29.818729  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:29.818784  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.822223  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:29.822298  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:29.860809  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:29.860829  435600 cri.go:89] found id: ""
	I0819 17:55:29.860837  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:29.860893  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.864512  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:29.864598  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:29.907361  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:29.907384  435600 cri.go:89] found id: ""
	I0819 17:55:29.907393  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:29.907450  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.911800  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:29.911874  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:29.950945  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:29.951019  435600 cri.go:89] found id: ""
	I0819 17:55:29.951041  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:29.951115  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.954853  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:29.954951  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:29.993143  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:29.993168  435600 cri.go:89] found id: ""
	I0819 17:55:29.993176  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:29.993268  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.997285  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:29.997413  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:30.052856  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:30.052880  435600 cri.go:89] found id: ""
	I0819 17:55:30.052889  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:30.052976  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:30.057285  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:30.057434  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:30.104928  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:30.105012  435600 cri.go:89] found id: ""
	I0819 17:55:30.105340  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:30.105425  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:30.110449  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:30.110482  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:30.170329  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:30.170374  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:30.233251  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:30.233285  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:30.299245  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:30.299282  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:30.358540  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:30.358572  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:30.455689  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:30.455725  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:30.546164  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:30.546199  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:30.563842  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:30.563871  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:30.730432  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:30.730468  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:30.846178  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:30.846218  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:30.892652  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:30.892685  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:30.965711  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:30.965755  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:33.573165  435600 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 17:55:33.581064  435600 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 17:55:33.582050  435600 api_server.go:141] control plane version: v1.31.0
	I0819 17:55:33.582077  435600 api_server.go:131] duration metric: took 3.801859602s to wait for apiserver health ...
	I0819 17:55:33.582088  435600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:55:33.582110  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:33.582177  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:33.629523  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:33.629551  435600 cri.go:89] found id: ""
	I0819 17:55:33.629560  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:33.629620  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.633357  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:33.633437  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:33.672899  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:33.672925  435600 cri.go:89] found id: ""
	I0819 17:55:33.672933  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:33.672993  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.676707  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:33.676790  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:33.733144  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:33.733220  435600 cri.go:89] found id: ""
	I0819 17:55:33.733258  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:33.733361  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.737494  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:33.737566  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:33.778400  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:33.778425  435600 cri.go:89] found id: ""
	I0819 17:55:33.778434  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:33.778489  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.782214  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:33.782286  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:33.823855  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:33.823879  435600 cri.go:89] found id: ""
	I0819 17:55:33.823888  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:33.823945  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.827658  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:33.827752  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:33.872012  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:33.872033  435600 cri.go:89] found id: ""
	I0819 17:55:33.872041  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:33.872120  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.877010  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:33.877108  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:33.923071  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:33.923142  435600 cri.go:89] found id: ""
	I0819 17:55:33.923165  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:33.923255  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.926912  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:33.926985  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:33.943943  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:33.944015  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:33.989898  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:33.989930  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:34.045104  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:34.045151  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:34.093048  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:34.093080  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:34.179867  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:34.179903  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:34.236800  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:34.236834  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:34.297375  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:34.297410  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:34.338907  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:34.338941  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:34.420737  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:34.420772  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:34.480945  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:34.480976  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:34.572749  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:34.572786  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:37.229390  435600 system_pods.go:59] 18 kube-system pods found
	I0819 17:55:37.229437  435600 system_pods.go:61] "coredns-6f6b679f8f-l8nmv" [ff489ec3-aafb-48e5-8b44-b3a688cdf8f4] Running
	I0819 17:55:37.229445  435600 system_pods.go:61] "csi-hostpath-attacher-0" [978351fc-eedc-46b0-8837-0408dbfe0733] Running
	I0819 17:55:37.229450  435600 system_pods.go:61] "csi-hostpath-resizer-0" [6f90ec57-00c5-4d1b-aa5a-8ed4775b934b] Running
	I0819 17:55:37.229454  435600 system_pods.go:61] "csi-hostpathplugin-qvmqd" [a19ee3c9-56ff-43fe-81d5-14d7b24057e2] Running
	I0819 17:55:37.229458  435600 system_pods.go:61] "etcd-addons-778133" [52f3011b-a727-4704-92b2-bf4441e9d845] Running
	I0819 17:55:37.229462  435600 system_pods.go:61] "kindnet-mnkhw" [48608aa5-fb50-4961-b41f-4c6fecece03c] Running
	I0819 17:55:37.229467  435600 system_pods.go:61] "kube-apiserver-addons-778133" [054b4e48-3d18-4a58-8af9-31c4acc00c4f] Running
	I0819 17:55:37.229473  435600 system_pods.go:61] "kube-controller-manager-addons-778133" [2de63fdd-9e5e-4ddb-87b0-b089a732b85f] Running
	I0819 17:55:37.229477  435600 system_pods.go:61] "kube-ingress-dns-minikube" [e58e7c8f-b313-444b-931c-07a556978e9f] Running
	I0819 17:55:37.229481  435600 system_pods.go:61] "kube-proxy-jzvz5" [e48349fd-8601-4066-913b-aa441c366b2b] Running
	I0819 17:55:37.229492  435600 system_pods.go:61] "kube-scheduler-addons-778133" [13fc982d-7f2c-4031-879b-81b8c20005f2] Running
	I0819 17:55:37.229496  435600 system_pods.go:61] "metrics-server-8988944d9-f95p9" [01704ab9-a4d6-4222-9216-dc0418048204] Running
	I0819 17:55:37.229500  435600 system_pods.go:61] "nvidia-device-plugin-daemonset-jf6ms" [64aac524-645a-4d2f-a7f0-16e99e357126] Running
	I0819 17:55:37.229504  435600 system_pods.go:61] "registry-6fb4cdfc84-jf8nh" [615dc4af-719f-4bfd-bd2e-4fe6e87fe0dc] Running
	I0819 17:55:37.229512  435600 system_pods.go:61] "registry-proxy-srkxv" [7eaaa77e-fb85-406d-86c6-1735b5cd1aeb] Running
	I0819 17:55:37.229521  435600 system_pods.go:61] "snapshot-controller-56fcc65765-8wkps" [04123999-4603-4c7e-ad1d-4b44f5b00eee] Running
	I0819 17:55:37.229526  435600 system_pods.go:61] "snapshot-controller-56fcc65765-psg4j" [317a730b-3c4a-419a-84a1-354749d88a48] Running
	I0819 17:55:37.229529  435600 system_pods.go:61] "storage-provisioner" [e2f4308c-5eed-4a83-86eb-cc99af197a86] Running
	I0819 17:55:37.229536  435600 system_pods.go:74] duration metric: took 3.647441971s to wait for pod list to return data ...
	I0819 17:55:37.229549  435600 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:55:37.232336  435600 default_sa.go:45] found service account: "default"
	I0819 17:55:37.232363  435600 default_sa.go:55] duration metric: took 2.806305ms for default service account to be created ...
	I0819 17:55:37.232379  435600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:55:37.242413  435600 system_pods.go:86] 18 kube-system pods found
	I0819 17:55:37.242458  435600 system_pods.go:89] "coredns-6f6b679f8f-l8nmv" [ff489ec3-aafb-48e5-8b44-b3a688cdf8f4] Running
	I0819 17:55:37.242466  435600 system_pods.go:89] "csi-hostpath-attacher-0" [978351fc-eedc-46b0-8837-0408dbfe0733] Running
	I0819 17:55:37.242471  435600 system_pods.go:89] "csi-hostpath-resizer-0" [6f90ec57-00c5-4d1b-aa5a-8ed4775b934b] Running
	I0819 17:55:37.242476  435600 system_pods.go:89] "csi-hostpathplugin-qvmqd" [a19ee3c9-56ff-43fe-81d5-14d7b24057e2] Running
	I0819 17:55:37.242481  435600 system_pods.go:89] "etcd-addons-778133" [52f3011b-a727-4704-92b2-bf4441e9d845] Running
	I0819 17:55:37.242487  435600 system_pods.go:89] "kindnet-mnkhw" [48608aa5-fb50-4961-b41f-4c6fecece03c] Running
	I0819 17:55:37.242492  435600 system_pods.go:89] "kube-apiserver-addons-778133" [054b4e48-3d18-4a58-8af9-31c4acc00c4f] Running
	I0819 17:55:37.242496  435600 system_pods.go:89] "kube-controller-manager-addons-778133" [2de63fdd-9e5e-4ddb-87b0-b089a732b85f] Running
	I0819 17:55:37.242500  435600 system_pods.go:89] "kube-ingress-dns-minikube" [e58e7c8f-b313-444b-931c-07a556978e9f] Running
	I0819 17:55:37.242504  435600 system_pods.go:89] "kube-proxy-jzvz5" [e48349fd-8601-4066-913b-aa441c366b2b] Running
	I0819 17:55:37.242508  435600 system_pods.go:89] "kube-scheduler-addons-778133" [13fc982d-7f2c-4031-879b-81b8c20005f2] Running
	I0819 17:55:37.242512  435600 system_pods.go:89] "metrics-server-8988944d9-f95p9" [01704ab9-a4d6-4222-9216-dc0418048204] Running
	I0819 17:55:37.242516  435600 system_pods.go:89] "nvidia-device-plugin-daemonset-jf6ms" [64aac524-645a-4d2f-a7f0-16e99e357126] Running
	I0819 17:55:37.242521  435600 system_pods.go:89] "registry-6fb4cdfc84-jf8nh" [615dc4af-719f-4bfd-bd2e-4fe6e87fe0dc] Running
	I0819 17:55:37.242525  435600 system_pods.go:89] "registry-proxy-srkxv" [7eaaa77e-fb85-406d-86c6-1735b5cd1aeb] Running
	I0819 17:55:37.242529  435600 system_pods.go:89] "snapshot-controller-56fcc65765-8wkps" [04123999-4603-4c7e-ad1d-4b44f5b00eee] Running
	I0819 17:55:37.242533  435600 system_pods.go:89] "snapshot-controller-56fcc65765-psg4j" [317a730b-3c4a-419a-84a1-354749d88a48] Running
	I0819 17:55:37.242537  435600 system_pods.go:89] "storage-provisioner" [e2f4308c-5eed-4a83-86eb-cc99af197a86] Running
	I0819 17:55:37.242546  435600 system_pods.go:126] duration metric: took 10.159637ms to wait for k8s-apps to be running ...
	I0819 17:55:37.242553  435600 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:55:37.242614  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:55:37.254550  435600 system_svc.go:56] duration metric: took 11.986559ms WaitForService to wait for kubelet
	I0819 17:55:37.254582  435600 kubeadm.go:582] duration metric: took 2m18.210057398s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:55:37.254603  435600 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:55:37.258002  435600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 17:55:37.258037  435600 node_conditions.go:123] node cpu capacity is 2
	I0819 17:55:37.258052  435600 node_conditions.go:105] duration metric: took 3.442377ms to run NodePressure ...
	I0819 17:55:37.258065  435600 start.go:241] waiting for startup goroutines ...
	I0819 17:55:37.258073  435600 start.go:246] waiting for cluster config update ...
	I0819 17:55:37.258094  435600 start.go:255] writing updated cluster config ...
	I0819 17:55:37.258394  435600 ssh_runner.go:195] Run: rm -f paused
	I0819 17:55:37.625312  435600 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:55:37.627067  435600 out.go:177] * Done! kubectl is now configured to use "addons-778133" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.151700332Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ba1be6d5-30e4-4d3a-81e5-b71d70da372a name=/runtime.v1.ImageService/ImageStatus
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.152706242Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-78fvr/hello-world-app" id=179eb93a-36a6-4d54-8508-49960270c2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.152795282Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.168760966Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0d13aaff45c41a8dd20a8d2e91a4dc7c0f404509e09e865fa2db035c8a24bc77/merged/etc/passwd: no such file or directory"
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.168810631Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0d13aaff45c41a8dd20a8d2e91a4dc7c0f404509e09e865fa2db035c8a24bc77/merged/etc/group: no such file or directory"
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.208341563Z" level=info msg="Created container 57e97b6aa75d035e78ef2f7f6199e2a71b504224f513c1a86570933778188377: default/hello-world-app-55bf9c44b4-78fvr/hello-world-app" id=179eb93a-36a6-4d54-8508-49960270c2c1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.208978062Z" level=info msg="Starting container: 57e97b6aa75d035e78ef2f7f6199e2a71b504224f513c1a86570933778188377" id=7406d7d6-2cb8-4d70-8492-eb489a62b4dc name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 17:59:58 addons-778133 crio[961]: time="2024-08-19 17:59:58.217543078Z" level=info msg="Started container" PID=8297 containerID=57e97b6aa75d035e78ef2f7f6199e2a71b504224f513c1a86570933778188377 description=default/hello-world-app-55bf9c44b4-78fvr/hello-world-app id=7406d7d6-2cb8-4d70-8492-eb489a62b4dc name=/runtime.v1.RuntimeService/StartContainer sandboxID=b94cb1c6fa206f06964a498ca211a77f2b2a1ce19a5a588af2579c036eaa482f
	Aug 19 17:59:59 addons-778133 crio[961]: time="2024-08-19 17:59:59.047721388Z" level=info msg="Removing container: 671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a" id=0371585a-4dd2-442b-9dab-98a88afb3b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 17:59:59 addons-778133 crio[961]: time="2024-08-19 17:59:59.069802976Z" level=info msg="Removed container 671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=0371585a-4dd2-442b-9dab-98a88afb3b03 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:00 addons-778133 crio[961]: time="2024-08-19 18:00:00.753470517Z" level=info msg="Stopping container: c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3 (timeout: 2s)" id=1d1ab9fb-b1b6-4573-a49f-d4b0ebd086e4 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.769440154Z" level=warning msg="Stopping container c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=1d1ab9fb-b1b6-4573-a49f-d4b0ebd086e4 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:00:02 addons-778133 conmon[4710]: conmon c50cc7c89f55a67d65ed <ninfo>: container 4722 exited with status 137
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.911264627Z" level=info msg="Stopped container c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3: ingress-nginx/ingress-nginx-controller-bc57996ff-tdsn4/controller" id=1d1ab9fb-b1b6-4573-a49f-d4b0ebd086e4 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.911748162Z" level=info msg="Stopping pod sandbox: 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=37e810c3-025f-4d42-aeb7-9a6acf8cb246 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.915592936Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-FRAEYAXEHTFXOXSD - [0:0]\n:KUBE-HP-2VZN53L2GHRDFIOB - [0:0]\n-X KUBE-HP-FRAEYAXEHTFXOXSD\n-X KUBE-HP-2VZN53L2GHRDFIOB\nCOMMIT\n"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.923478820Z" level=info msg="Closing host port tcp:80"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.923533251Z" level=info msg="Closing host port tcp:443"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.924941574Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.924977175Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.925179616Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-tdsn4 Namespace:ingress-nginx ID:7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054 UID:357abd46-1b72-46b7-94de-37d0233d4f8a NetNS:/var/run/netns/b9fba250-13b5-47c5-9dc9-7308822c2bb1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.925324466Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-tdsn4 from CNI network \"kindnet\" (type=ptp)"
	Aug 19 18:00:02 addons-778133 crio[961]: time="2024-08-19 18:00:02.945868643Z" level=info msg="Stopped pod sandbox: 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=37e810c3-025f-4d42-aeb7-9a6acf8cb246 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:03 addons-778133 crio[961]: time="2024-08-19 18:00:03.067930378Z" level=info msg="Removing container: c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3" id=1f70a8b1-4621-4b64-bd4c-8339ab64d0cf name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:03 addons-778133 crio[961]: time="2024-08-19 18:00:03.083848146Z" level=info msg="Removed container c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3: ingress-nginx/ingress-nginx-controller-bc57996ff-tdsn4/controller" id=1f70a8b1-4621-4b64-bd4c-8339ab64d0cf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57e97b6aa75d0       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   b94cb1c6fa206       hello-world-app-55bf9c44b4-78fvr
	d0c76f8a51cff       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                              2 minutes ago       Running             nginx                     0                   697d05b4cfc6d       nginx
	96e92759b80ac       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   21001031e7905       headlamp-57fb76fcdb-bsc82
	54874153e84f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   c8bf5dcfe3392       busybox
	1beb519a6558c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   5 minutes ago       Exited              patch                     0                   9784d5fbc2b46       ingress-nginx-admission-patch-hk6gt
	68e92778b6e92       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   5 minutes ago       Exited              create                    0                   32c8f4d54a6d3       ingress-nginx-admission-create-fptqb
	78d4968fc5b74       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   2d60f28975825       metrics-server-8988944d9-f95p9
	2f59708cc8e1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   fb1f3160eba87       storage-provisioner
	cb8ee644d62a0       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   4f5369fda24fb       coredns-6f6b679f8f-l8nmv
	7ac2f38031322       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           6 minutes ago       Running             kindnet-cni               0                   cf9e71b77b860       kindnet-mnkhw
	665fbf835c117       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             6 minutes ago       Running             kube-proxy                0                   f84e0226d1528       kube-proxy-jzvz5
	d6d1155da1ee8       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             7 minutes ago       Running             kube-scheduler            0                   ef351c987be1c       kube-scheduler-addons-778133
	73059fa5f98e6       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             7 minutes ago       Running             kube-apiserver            0                   3ce8f38d2889f       kube-apiserver-addons-778133
	74f05f4f63420       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             7 minutes ago       Running             etcd                      0                   0c40df88d60af       etcd-addons-778133
	186afb1dba18c       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             7 minutes ago       Running             kube-controller-manager   0                   c7f30670453db       kube-controller-manager-addons-778133
	
	
	==> coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] <==
	[INFO] 10.244.0.14:55011 - 42576 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003109691s
	[INFO] 10.244.0.14:54826 - 37430 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131853s
	[INFO] 10.244.0.14:54826 - 56627 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000240298s
	[INFO] 10.244.0.14:60519 - 59371 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150372s
	[INFO] 10.244.0.14:60519 - 57839 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000282045s
	[INFO] 10.244.0.14:42949 - 37512 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043363s
	[INFO] 10.244.0.14:42949 - 10122 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000234169s
	[INFO] 10.244.0.14:45041 - 51772 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056154s
	[INFO] 10.244.0.14:45041 - 59966 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004036s
	[INFO] 10.244.0.14:34910 - 26992 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00188617s
	[INFO] 10.244.0.14:34910 - 46701 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001900462s
	[INFO] 10.244.0.14:56948 - 29684 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099461s
	[INFO] 10.244.0.14:56948 - 1782 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045513s
	[INFO] 10.244.0.20:50743 - 37083 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000219769s
	[INFO] 10.244.0.20:45557 - 33365 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143398s
	[INFO] 10.244.0.20:57368 - 60701 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157412s
	[INFO] 10.244.0.20:43160 - 41118 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142241s
	[INFO] 10.244.0.20:45603 - 19671 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127882s
	[INFO] 10.244.0.20:35560 - 60628 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118759s
	[INFO] 10.244.0.20:36075 - 42310 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002683346s
	[INFO] 10.244.0.20:37533 - 3033 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002363722s
	[INFO] 10.244.0.20:41044 - 32113 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000688879s
	[INFO] 10.244.0.20:51797 - 31819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002364445s
	[INFO] 10.244.0.22:41573 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151611s
	[INFO] 10.244.0.22:42228 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000632175s
	
	
	==> describe nodes <==
	Name:               addons-778133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-778133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=addons-778133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_53_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-778133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-778133
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:57:49 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:57:49 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:57:49 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:57:49 +0000   Mon, 19 Aug 2024 17:54:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-778133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bfc93155ec64edc9657b547521008c5
	  System UUID:                e768685e-9a74-48fe-97d3-1ac53dac6fc4
	  Boot ID:                    b7846bbc-2ca5-4e44-8ea6-94e5c03d25fd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  default                     hello-world-app-55bf9c44b4-78fvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  headlamp                    headlamp-57fb76fcdb-bsc82                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 coredns-6f6b679f8f-l8nmv                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m50s
	  kube-system                 etcd-addons-778133                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m54s
	  kube-system                 kindnet-mnkhw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m51s
	  kube-system                 kube-apiserver-addons-778133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-controller-manager-addons-778133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-jzvz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-scheduler-addons-778133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 metrics-server-8988944d9-f95p9           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m44s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m42s                kube-proxy       
	  Normal   Starting                 7m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m2s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m2s (x8 over 7m2s)  kubelet          Node addons-778133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m2s (x8 over 7m2s)  kubelet          Node addons-778133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m2s (x7 over 7m2s)  kubelet          Node addons-778133 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m55s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m55s                kubelet          Node addons-778133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m55s                kubelet          Node addons-778133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m55s                kubelet          Node addons-778133 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m51s                node-controller  Node addons-778133 event: Registered Node addons-778133 in Controller
	  Normal   NodeReady                6m3s                 kubelet          Node addons-778133 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 16:56] systemd-journald[216]: Failed to send stream file descriptor to service manager: Connection refused
	[Aug19 17:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug19 17:30] hrtimer: interrupt took 7461724 ns
	
	
	==> etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] <==
	{"level":"info","ts":"2024-08-19T17:53:22.333319Z","caller":"traceutil/trace.go:171","msg":"trace[540576265] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"108.634884ms","start":"2024-08-19T17:53:22.224669Z","end":"2024-08-19T17:53:22.333303Z","steps":["trace[540576265] 'read index received'  (duration: 108.628919ms)","trace[540576265] 'applied index is now lower than readState.Index'  (duration: 4.644µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:53:22.483479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.787806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-778133\" ","response":"range_response_count:1 size:5738"}
	{"level":"info","ts":"2024-08-19T17:53:22.483642Z","caller":"traceutil/trace.go:171","msg":"trace[1808572898] range","detail":"{range_begin:/registry/minions/addons-778133; range_end:; response_count:1; response_revision:404; }","duration":"258.963228ms","start":"2024-08-19T17:53:22.224664Z","end":"2024-08-19T17:53:22.483628Z","steps":["trace[1808572898] 'agreement among raft nodes before linearized reading'  (duration: 232.043632ms)","trace[1808572898] 'range keys from in-memory index tree'  (duration: 26.681152ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:53:22.498867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.148464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:22.498928Z","caller":"traceutil/trace.go:171","msg":"trace[1294876105] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"274.215548ms","start":"2024-08-19T17:53:22.224700Z","end":"2024-08-19T17:53:22.498916Z","steps":["trace[1294876105] 'agreement among raft nodes before linearized reading'  (duration: 274.119805ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.149643Z","caller":"traceutil/trace.go:171","msg":"trace[857139026] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"109.888977ms","start":"2024-08-19T17:53:24.039742Z","end":"2024-08-19T17:53:24.149631Z","steps":["trace[857139026] 'process raft request'  (duration: 72.042863ms)","trace[857139026] 'compare'  (duration: 37.016585ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:53:24.149897Z","caller":"traceutil/trace.go:171","msg":"trace[2115532947] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"109.061467ms","start":"2024-08-19T17:53:24.040826Z","end":"2024-08-19T17:53:24.149888Z","steps":["trace[2115532947] 'process raft request'  (duration: 108.17823ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.149998Z","caller":"traceutil/trace.go:171","msg":"trace[1426841752] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:490; }","duration":"108.999807ms","start":"2024-08-19T17:53:24.040992Z","end":"2024-08-19T17:53:24.149991Z","steps":["trace[1426841752] 'read index received'  (duration: 70.722811ms)","trace[1426841752] 'applied index is now lower than readState.Index'  (duration: 38.276438ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:53:24.150053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.04747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.237730Z","caller":"traceutil/trace.go:171","msg":"trace[1743215363] range","detail":"{range_begin:/registry/clusterrolebindings/yakd-dashboard; range_end:; response_count:0; response_revision:486; }","duration":"196.725561ms","start":"2024-08-19T17:53:24.040986Z","end":"2024-08-19T17:53:24.237712Z","steps":["trace[1743215363] 'agreement among raft nodes before linearized reading'  (duration: 109.023978ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.150067Z","caller":"traceutil/trace.go:171","msg":"trace[432888513] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"107.88784ms","start":"2024-08-19T17:53:24.041574Z","end":"2024-08-19T17:53:24.149462Z","steps":["trace[432888513] 'process raft request'  (duration: 107.58269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.228275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.188443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-08-19T17:53:24.253045Z","caller":"traceutil/trace.go:171","msg":"trace[257717253] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:490; }","duration":"210.964407ms","start":"2024-08-19T17:53:24.042060Z","end":"2024-08-19T17:53:24.253024Z","steps":["trace[257717253] 'agreement among raft nodes before linearized reading'  (duration: 184.309749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.207822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4996"}
	{"level":"info","ts":"2024-08-19T17:53:24.253505Z","caller":"traceutil/trace.go:171","msg":"trace[118409466] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:490; }","duration":"140.962573ms","start":"2024-08-19T17:53:24.112531Z","end":"2024-08-19T17:53:24.253494Z","steps":["trace[118409466] 'agreement among raft nodes before linearized reading'  (duration: 120.620824ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.470651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-778133\" ","response":"range_response_count:1 size:5738"}
	{"level":"info","ts":"2024-08-19T17:53:24.254794Z","caller":"traceutil/trace.go:171","msg":"trace[1358600118] range","detail":"{range_begin:/registry/minions/addons-778133; range_end:; response_count:1; response_revision:490; }","duration":"142.407078ms","start":"2024-08-19T17:53:24.112376Z","end":"2024-08-19T17:53:24.254783Z","steps":["trace[1358600118] 'agreement among raft nodes before linearized reading'  (duration: 121.415136ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.561644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-778133\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-08-19T17:53:24.256762Z","caller":"traceutil/trace.go:171","msg":"trace[427476606] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-778133; range_end:; response_count:1; response_revision:490; }","duration":"144.407861ms","start":"2024-08-19T17:53:24.112339Z","end":"2024-08-19T17:53:24.256747Z","steps":["trace[427476606] 'agreement among raft nodes before linearized reading'  (duration: 121.525377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.634635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.260492Z","caller":"traceutil/trace.go:171","msg":"trace[1113090609] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:490; }","duration":"148.191629ms","start":"2024-08-19T17:53:24.112289Z","end":"2024-08-19T17:53:24.260481Z","steps":["trace[1113090609] 'agreement among raft nodes before linearized reading'  (duration: 121.6262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233950Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.720804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-08-19T17:53:24.260723Z","caller":"traceutil/trace.go:171","msg":"trace[159367268] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:490; }","duration":"209.489953ms","start":"2024-08-19T17:53:24.051226Z","end":"2024-08-19T17:53:24.260716Z","steps":["trace[159367268] 'agreement among raft nodes before linearized reading'  (duration: 182.70939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.064928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.260849Z","caller":"traceutil/trace.go:171","msg":"trace[1871651477] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:490; }","duration":"209.939552ms","start":"2024-08-19T17:53:24.050902Z","end":"2024-08-19T17:53:24.260842Z","steps":["trace[1871651477] 'agreement among raft nodes before linearized reading'  (duration: 183.056156ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:00:08 up  1:42,  0 users,  load average: 0.08, 1.36, 2.78
	Linux addons-778133 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] <==
	E0819 17:58:48.742967       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 17:58:55.022667       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:58:55.022715       1 main.go:299] handling current node
	I0819 17:59:05.021432       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:05.021469       1 main.go:299] handling current node
	W0819 17:59:06.474509       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:59:06.474543       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 17:59:15.024448       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:15.024592       1 main.go:299] handling current node
	W0819 17:59:20.258754       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:59:20.258789       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 17:59:25.021747       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:25.021788       1 main.go:299] handling current node
	I0819 17:59:35.021937       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:35.022061       1 main.go:299] handling current node
	W0819 17:59:39.182391       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:59:39.182428       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 17:59:45.022194       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:45.022344       1 main.go:299] handling current node
	W0819 17:59:45.353904       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 17:59:45.353948       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 17:59:55.021928       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:59:55.021971       1 main.go:299] handling current node
	I0819 18:00:05.022144       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:00:05.022182       1 main.go:299] handling current node
	
	
	==> kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] <==
	I0819 17:56:37.748568       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 17:56:40.166950       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:40.178326       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:40.189522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:55.190255       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0819 17:57:04.485630       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.485684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.506912       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.506984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.601005       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.601130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.602249       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.602394       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.720080       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.720200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 17:57:05.601798       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 17:57:05.721185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 17:57:05.735480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 17:57:11.428170       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.217.251"}
	I0819 17:57:30.299442       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 17:57:31.332189       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 17:57:35.931103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 17:57:36.255080       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.23.252"}
	I0819 17:59:57.099037       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.254.203"}
	E0819 17:59:59.089258       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] <==
	W0819 17:58:42.483899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:58:42.483942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:58:59.534806       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:58:59.534862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:59:18.909826       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:59:18.909868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:59:19.163831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:59:19.163960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:59:21.437175       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:59:21.437314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:59:48.967666       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:59:48.967711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 17:59:56.846804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.05404ms"
	I0819 17:59:56.853959       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.03042ms"
	I0819 17:59:56.862532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.44753ms"
	I0819 17:59:56.862686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.845µs"
	W0819 17:59:58.078715       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:59:58.078758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 17:59:59.076379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.301015ms"
	I0819 17:59:59.076477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.439µs"
	I0819 17:59:59.725587       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 17:59:59.728060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.456µs"
	I0819 17:59:59.734203       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0819 18:00:02.248068       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:00:02.248115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] <==
	I0819 17:53:23.126781       1 server_linux.go:66] "Using iptables proxy"
	I0819 17:53:25.380403       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 17:53:25.398358       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:53:25.608063       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 17:53:25.608209       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:53:25.613045       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:53:25.613675       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:53:25.613748       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:53:25.632029       1 config.go:197] "Starting service config controller"
	I0819 17:53:25.632078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:53:25.632110       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:53:25.632115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:53:25.632608       1 config.go:326] "Starting node config controller"
	I0819 17:53:25.632628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:53:25.732530       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:53:25.735776       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:53:25.735810       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] <==
	W0819 17:53:11.001559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:11.004107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.001588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:53:11.004252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.833134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:53:11.833258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.845910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:53:11.846018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.863522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:11.863640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.962266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:53:11.962383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.047930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:53:12.048047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.076410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:12.076458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.118469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:53:12.118526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.125759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:53:12.125811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.277982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:53:12.278028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.315006       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:53:12.315049       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 17:53:15.472040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:59:58 addons-778133 kubelet[1494]: I0819 17:59:58.257779    1494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8gpg\" (UniqueName: \"kubernetes.io/projected/e58e7c8f-b313-444b-931c-07a556978e9f-kube-api-access-z8gpg\") pod \"e58e7c8f-b313-444b-931c-07a556978e9f\" (UID: \"e58e7c8f-b313-444b-931c-07a556978e9f\") "
	Aug 19 17:59:58 addons-778133 kubelet[1494]: I0819 17:59:58.259947    1494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58e7c8f-b313-444b-931c-07a556978e9f-kube-api-access-z8gpg" (OuterVolumeSpecName: "kube-api-access-z8gpg") pod "e58e7c8f-b313-444b-931c-07a556978e9f" (UID: "e58e7c8f-b313-444b-931c-07a556978e9f"). InnerVolumeSpecName "kube-api-access-z8gpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 17:59:58 addons-778133 kubelet[1494]: I0819 17:59:58.358556    1494 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z8gpg\" (UniqueName: \"kubernetes.io/projected/e58e7c8f-b313-444b-931c-07a556978e9f-kube-api-access-z8gpg\") on node \"addons-778133\" DevicePath \"\""
	Aug 19 17:59:58 addons-778133 kubelet[1494]: I0819 17:59:58.726610    1494 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: I0819 17:59:59.045828    1494 scope.go:117] "RemoveContainer" containerID="671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: I0819 17:59:59.070236    1494 scope.go:117] "RemoveContainer" containerID="671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: E0819 17:59:59.070631    1494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a\": container with ID starting with 671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a not found: ID does not exist" containerID="671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: I0819 17:59:59.070661    1494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a"} err="failed to get container status \"671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a\": rpc error: code = NotFound desc = could not find container \"671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a\": container with ID starting with 671105c0fcf94223b267eedcedb4440a7a8fcd0cab3d03c22a5a70fa8ba8f22a not found: ID does not exist"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: I0819 17:59:59.081141    1494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-78fvr" podStartSLOduration=2.15345803 podStartE2EDuration="3.081122164s" podCreationTimestamp="2024-08-19 17:59:56 +0000 UTC" firstStartedPulling="2024-08-19 17:59:57.222714913 +0000 UTC m=+403.590238935" lastFinishedPulling="2024-08-19 17:59:58.150379048 +0000 UTC m=+404.517903069" observedRunningTime="2024-08-19 17:59:59.057711899 +0000 UTC m=+405.425235921" watchObservedRunningTime="2024-08-19 17:59:59.081122164 +0000 UTC m=+405.448646186"
	Aug 19 17:59:59 addons-778133 kubelet[1494]: I0819 17:59:59.728932    1494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58e7c8f-b313-444b-931c-07a556978e9f" path="/var/lib/kubelet/pods/e58e7c8f-b313-444b-931c-07a556978e9f/volumes"
	Aug 19 18:00:01 addons-778133 kubelet[1494]: I0819 18:00:01.728794    1494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f269c72-b97d-459c-8426-63e29c7a7746" path="/var/lib/kubelet/pods/1f269c72-b97d-459c-8426-63e29c7a7746/volumes"
	Aug 19 18:00:01 addons-778133 kubelet[1494]: I0819 18:00:01.729457    1494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbdcc5f-65a9-4417-b184-ce68fffd184a" path="/var/lib/kubelet/pods/abbdcc5f-65a9-4417-b184-ce68fffd184a/volumes"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.046931    1494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg78t\" (UniqueName: \"kubernetes.io/projected/357abd46-1b72-46b7-94de-37d0233d4f8a-kube-api-access-tg78t\") pod \"357abd46-1b72-46b7-94de-37d0233d4f8a\" (UID: \"357abd46-1b72-46b7-94de-37d0233d4f8a\") "
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.047005    1494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/357abd46-1b72-46b7-94de-37d0233d4f8a-webhook-cert\") pod \"357abd46-1b72-46b7-94de-37d0233d4f8a\" (UID: \"357abd46-1b72-46b7-94de-37d0233d4f8a\") "
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.049577    1494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357abd46-1b72-46b7-94de-37d0233d4f8a-kube-api-access-tg78t" (OuterVolumeSpecName: "kube-api-access-tg78t") pod "357abd46-1b72-46b7-94de-37d0233d4f8a" (UID: "357abd46-1b72-46b7-94de-37d0233d4f8a"). InnerVolumeSpecName "kube-api-access-tg78t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.050281    1494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/357abd46-1b72-46b7-94de-37d0233d4f8a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "357abd46-1b72-46b7-94de-37d0233d4f8a" (UID: "357abd46-1b72-46b7-94de-37d0233d4f8a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.065508    1494 scope.go:117] "RemoveContainer" containerID="c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.084136    1494 scope.go:117] "RemoveContainer" containerID="c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: E0819 18:00:03.084599    1494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3\": container with ID starting with c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3 not found: ID does not exist" containerID="c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.084644    1494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3"} err="failed to get container status \"c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3\": rpc error: code = NotFound desc = could not find container \"c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3\": container with ID starting with c50cc7c89f55a67d65ed450c0e4f63ceec7f2f51cbbebcf5cd5d66f6cd84e8c3 not found: ID does not exist"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.147992    1494 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/357abd46-1b72-46b7-94de-37d0233d4f8a-webhook-cert\") on node \"addons-778133\" DevicePath \"\""
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.148037    1494 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tg78t\" (UniqueName: \"kubernetes.io/projected/357abd46-1b72-46b7-94de-37d0233d4f8a-kube-api-access-tg78t\") on node \"addons-778133\" DevicePath \"\""
	Aug 19 18:00:03 addons-778133 kubelet[1494]: I0819 18:00:03.727849    1494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357abd46-1b72-46b7-94de-37d0233d4f8a" path="/var/lib/kubelet/pods/357abd46-1b72-46b7-94de-37d0233d4f8a/volumes"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: E0819 18:00:03.991430    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090403991143021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:00:03 addons-778133 kubelet[1494]: E0819 18:00:03.991470    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090403991143021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2f59708cc8e1b0b8e7dfd0401a210142e0eed0afc80bb2a9f073bd6240219ca3] <==
	I0819 17:54:06.393834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:54:06.405817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:54:06.406877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:54:06.419929       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:54:06.420156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b!
	I0819 17:54:06.420829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00224464-263e-42b7-bd36-2bcb2ab3a0ec", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b became leader
	I0819 17:54:06.521882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-778133 -n addons-778133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-778133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.57s)

TestAddons/parallel/MetricsServer (351.19s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.140945ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-f95p9" [01704ab9-a4d6-4222-9216-dc0418048204] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004033191s
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (109.716269ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m10.826594164s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (88.363481ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m14.472160983s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (105.983215ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m18.552178421s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (99.500962ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m23.83006149s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (91.508735ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m30.227415283s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (90.248383ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 4m43.481011012s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (81.888831ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 5m6.47797581s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (83.673647ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 5m33.736075651s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (87.554403ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 6m5.191586345s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (86.083082ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 7m8.320376463s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (93.151904ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 8m13.517119949s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (92.313472ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 8m43.874199714s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-778133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-778133 top pods -n kube-system: exit status 1 (84.711257ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-l8nmv, age: 9m52.82854611s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-778133
helpers_test.go:235: (dbg) docker inspect addons-778133:

-- stdout --
	[
	    {
	        "Id": "04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44",
	        "Created": "2024-08-19T17:52:50.954183116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T17:52:51.084486799Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1082065554095668b21dfc58cfca3febbc96bb8424fcaec6e38d6ee040df84c8",
	        "ResolvConfPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/hostname",
	        "HostsPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/hosts",
	        "LogPath": "/var/lib/docker/containers/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44/04d2b6f0984ad45506a450daf6bbf12d98582b3d6c50251160fae1280a483a44-json.log",
	        "Name": "/addons-778133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-778133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-778133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c-init/diff:/var/lib/docker/overlay2/18c6643ae063556b6e8c1e5b89d206551c41c973a0328ed325f1a299d228eb84/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94ea42193065398cd4079dfea372f0e98dd209023968d26efb88ebd211723e1c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-778133",
	                "Source": "/var/lib/docker/volumes/addons-778133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-778133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-778133",
	                "name.minikube.sigs.k8s.io": "addons-778133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58ad6a33cf42256571749241bb2bb8dd1b1a4c6ece618561dda5752029711b53",
	            "SandboxKey": "/var/run/docker/netns/58ad6a33cf42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-778133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ddf82f9e1e4cfa011e39367f54d35ae59db28a37a95c5531afcbd77f13f87fc1",
	                    "EndpointID": "db6c8f345048c659fe266ad293bbb9be9b7bfaf3319506b539d009b2f4f76d1f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-778133",
	                        "04d2b6f0984a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-778133 -n addons-778133
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 logs -n 25: (1.427639691s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-552596 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | download-docker-552596                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-552596                                                                   | download-docker-552596 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-383479   | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | binary-mirror-383479                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45625                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-383479                                                                     | binary-mirror-383479   | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-778133 --wait=true                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:55 UTC | 19 Aug 24 17:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-778133 ip                                                                            | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | -p addons-778133                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-778133 ssh cat                                                                       | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	|         | /opt/local-path-provisioner/pvc-de919d21-52a1-44ba-882f-4f4cb571fe76_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | -p addons-778133                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC | 19 Aug 24 17:57 UTC |
	|         | addons-778133                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-778133 ssh curl -s                                                                   | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:57 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-778133 ip                                                                            | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-778133 addons disable                                                                | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 18:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-778133 addons                                                                        | addons-778133          | jenkins | v1.33.1 | 19 Aug 24 18:03 UTC | 19 Aug 24 18:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:52:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:52:27.122467  435600 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:52:27.124658  435600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:27.124677  435600 out.go:358] Setting ErrFile to fd 2...
	I0819 17:52:27.124683  435600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:27.124963  435600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 17:52:27.125449  435600 out.go:352] Setting JSON to false
	I0819 17:52:27.126308  435600 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5694,"bootTime":1724084253,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 17:52:27.126386  435600 start.go:139] virtualization:  
	I0819 17:52:27.129034  435600 out.go:177] * [addons-778133] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:52:27.132200  435600 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:52:27.132372  435600 notify.go:220] Checking for updates...
	I0819 17:52:27.135563  435600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:52:27.138289  435600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:52:27.140302  435600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 17:52:27.142345  435600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 17:52:27.144334  435600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:52:27.146356  435600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:52:27.169447  435600 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:52:27.169574  435600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:27.229408  435600 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:27.219992119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:27.229519  435600 docker.go:307] overlay module found
	I0819 17:52:27.231379  435600 out.go:177] * Using the docker driver based on user configuration
	I0819 17:52:27.232634  435600 start.go:297] selected driver: docker
	I0819 17:52:27.232650  435600 start.go:901] validating driver "docker" against <nil>
	I0819 17:52:27.232665  435600 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:52:27.233312  435600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:27.285196  435600 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:27.275652498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:27.285404  435600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:52:27.285635  435600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:52:27.287582  435600 out.go:177] * Using Docker driver with root privileges
	I0819 17:52:27.289719  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:52:27.289744  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:27.289755  435600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:52:27.289842  435600 start.go:340] cluster config:
	{Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:27.291300  435600 out.go:177] * Starting "addons-778133" primary control-plane node in "addons-778133" cluster
	I0819 17:52:27.293694  435600 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:52:27.295003  435600 out.go:177] * Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:52:27.297374  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:27.297435  435600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:27.297458  435600 cache.go:56] Caching tarball of preloaded images
	I0819 17:52:27.297462  435600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:52:27.297540  435600 preload.go:172] Found /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 17:52:27.297550  435600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:52:27.297911  435600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json ...
	I0819 17:52:27.297942  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json: {Name:mk5de3d37436266e25961fb00c0c5a84a91bf9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:27.313071  435600 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:52:27.313195  435600 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:52:27.313221  435600 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory, skipping pull
	I0819 17:52:27.313232  435600 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b exists in cache, skipping pull
	I0819 17:52:27.313241  435600 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:52:27.313251  435600 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from local cache
	I0819 17:52:43.944311  435600 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from cached tarball
	I0819 17:52:43.944351  435600 cache.go:194] Successfully downloaded all kic artifacts
	I0819 17:52:43.944395  435600 start.go:360] acquireMachinesLock for addons-778133: {Name:mk95a2ebd9f8fd65d585e6bdd4fe86a3f12663b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:52:43.944894  435600 start.go:364] duration metric: took 474.221µs to acquireMachinesLock for "addons-778133"
	I0819 17:52:43.944931  435600 start.go:93] Provisioning new machine with config: &{Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:52:43.945028  435600 start.go:125] createHost starting for "" (driver="docker")
	I0819 17:52:43.946461  435600 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 17:52:43.946683  435600 start.go:159] libmachine.API.Create for "addons-778133" (driver="docker")
	I0819 17:52:43.946714  435600 client.go:168] LocalClient.Create starting
	I0819 17:52:43.946799  435600 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem
	I0819 17:52:44.287860  435600 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem
	I0819 17:52:44.740254  435600 cli_runner.go:164] Run: docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 17:52:44.755335  435600 cli_runner.go:211] docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 17:52:44.755429  435600 network_create.go:284] running [docker network inspect addons-778133] to gather additional debugging logs...
	I0819 17:52:44.755449  435600 cli_runner.go:164] Run: docker network inspect addons-778133
	W0819 17:52:44.768582  435600 cli_runner.go:211] docker network inspect addons-778133 returned with exit code 1
	I0819 17:52:44.768613  435600 network_create.go:287] error running [docker network inspect addons-778133]: docker network inspect addons-778133: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-778133 not found
	I0819 17:52:44.768626  435600 network_create.go:289] output of [docker network inspect addons-778133]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-778133 not found
	
	** /stderr **
	I0819 17:52:44.768729  435600 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:52:44.784048  435600 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a51e70}
	I0819 17:52:44.784084  435600 network_create.go:124] attempt to create docker network addons-778133 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 17:52:44.784153  435600 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-778133 addons-778133
	I0819 17:52:44.848141  435600 network_create.go:108] docker network addons-778133 192.168.49.0/24 created
	I0819 17:52:44.848173  435600 kic.go:121] calculated static IP "192.168.49.2" for the "addons-778133" container
	I0819 17:52:44.848292  435600 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 17:52:44.862848  435600 cli_runner.go:164] Run: docker volume create addons-778133 --label name.minikube.sigs.k8s.io=addons-778133 --label created_by.minikube.sigs.k8s.io=true
	I0819 17:52:44.879737  435600 oci.go:103] Successfully created a docker volume addons-778133
	I0819 17:52:44.879833  435600 cli_runner.go:164] Run: docker run --rm --name addons-778133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --entrypoint /usr/bin/test -v addons-778133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib
	I0819 17:52:46.855852  435600 cli_runner.go:217] Completed: docker run --rm --name addons-778133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --entrypoint /usr/bin/test -v addons-778133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib: (1.975983619s)
	I0819 17:52:46.855884  435600 oci.go:107] Successfully prepared a docker volume addons-778133
	I0819 17:52:46.855926  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:46.855958  435600 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 17:52:46.856036  435600 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-778133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 17:52:50.888577  435600 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-778133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir: (4.032490312s)
	I0819 17:52:50.888615  435600 kic.go:203] duration metric: took 4.03265359s to extract preloaded images to volume ...
	W0819 17:52:50.888756  435600 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 17:52:50.888871  435600 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 17:52:50.940133  435600 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-778133 --name addons-778133 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-778133 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-778133 --network addons-778133 --ip 192.168.49.2 --volume addons-778133:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b
	I0819 17:52:51.242166  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Running}}
	I0819 17:52:51.262869  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:51.286367  435600 cli_runner.go:164] Run: docker exec addons-778133 stat /var/lib/dpkg/alternatives/iptables
	I0819 17:52:51.352160  435600 oci.go:144] the created container "addons-778133" has a running status.
	I0819 17:52:51.352192  435600 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa...
	I0819 17:52:52.310936  435600 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 17:52:52.336037  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:52.356452  435600 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 17:52:52.356474  435600 kic_runner.go:114] Args: [docker exec --privileged addons-778133 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 17:52:52.415476  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:52:52.431680  435600 machine.go:93] provisionDockerMachine start ...
	I0819 17:52:52.431774  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.448071  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.448429  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.448448  435600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:52:52.579462  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-778133
	
	I0819 17:52:52.579487  435600 ubuntu.go:169] provisioning hostname "addons-778133"
	I0819 17:52:52.579552  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.594983  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.595227  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.595239  435600 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-778133 && echo "addons-778133" | sudo tee /etc/hostname
	I0819 17:52:52.741356  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-778133
	
	I0819 17:52:52.741443  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:52.759487  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:52.759757  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:52.759782  435600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-778133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-778133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-778133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:52:52.892460  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:52:52.892488  435600 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19478-429440/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-429440/.minikube}
	I0819 17:52:52.892516  435600 ubuntu.go:177] setting up certificates
	I0819 17:52:52.892527  435600 provision.go:84] configureAuth start
	I0819 17:52:52.892615  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:52.909464  435600 provision.go:143] copyHostCerts
	I0819 17:52:52.909566  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/ca.pem (1082 bytes)
	I0819 17:52:52.909719  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/cert.pem (1123 bytes)
	I0819 17:52:52.909828  435600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-429440/.minikube/key.pem (1679 bytes)
	I0819 17:52:52.909911  435600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem org=jenkins.addons-778133 san=[127.0.0.1 192.168.49.2 addons-778133 localhost minikube]
	I0819 17:52:53.125810  435600 provision.go:177] copyRemoteCerts
	I0819 17:52:53.125893  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:52:53.125965  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.143545  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.237582  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:52:53.261074  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 17:52:53.284096  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:52:53.306941  435600 provision.go:87] duration metric: took 414.397023ms to configureAuth
	I0819 17:52:53.306968  435600 ubuntu.go:193] setting minikube options for container-runtime
	I0819 17:52:53.307167  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:52:53.307291  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.323680  435600 main.go:141] libmachine: Using SSH client type: native
	I0819 17:52:53.323918  435600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33166 <nil> <nil>}
	I0819 17:52:53.323939  435600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:52:53.568532  435600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:52:53.568620  435600 machine.go:96] duration metric: took 1.136918403s to provisionDockerMachine
	I0819 17:52:53.568645  435600 client.go:171] duration metric: took 9.621923787s to LocalClient.Create
	I0819 17:52:53.568696  435600 start.go:167] duration metric: took 9.622011472s to libmachine.API.Create "addons-778133"
	I0819 17:52:53.568728  435600 start.go:293] postStartSetup for "addons-778133" (driver="docker")
	I0819 17:52:53.568754  435600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:52:53.568842  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:52:53.568901  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.590467  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.687033  435600 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:52:53.690280  435600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 17:52:53.690317  435600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 17:52:53.690329  435600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 17:52:53.690335  435600 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 17:52:53.690345  435600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-429440/.minikube/addons for local assets ...
	I0819 17:52:53.690407  435600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-429440/.minikube/files for local assets ...
	I0819 17:52:53.690434  435600 start.go:296] duration metric: took 121.685833ms for postStartSetup
	I0819 17:52:53.690744  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:53.707347  435600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/config.json ...
	I0819 17:52:53.707634  435600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:52:53.707685  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.723681  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.812652  435600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 17:52:53.816656  435600 start.go:128] duration metric: took 9.871612399s to createHost
	I0819 17:52:53.816685  435600 start.go:83] releasing machines lock for "addons-778133", held for 9.871772789s
	I0819 17:52:53.816752  435600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-778133
	I0819 17:52:53.835069  435600 ssh_runner.go:195] Run: cat /version.json
	I0819 17:52:53.835149  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.835455  435600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:52:53.835534  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:52:53.865173  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:53.873940  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:52:54.090887  435600 ssh_runner.go:195] Run: systemctl --version
	I0819 17:52:54.095407  435600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:52:54.240786  435600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:52:54.244829  435600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:52:54.268526  435600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 17:52:54.268671  435600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:52:54.301504  435600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 17:52:54.301527  435600 start.go:495] detecting cgroup driver to use...
	I0819 17:52:54.301579  435600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 17:52:54.301646  435600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:52:54.317954  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:52:54.328974  435600 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:52:54.329036  435600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:52:54.343038  435600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:52:54.356914  435600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:52:54.437144  435600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:52:54.536059  435600 docker.go:233] disabling docker service ...
	I0819 17:52:54.536170  435600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:52:54.561009  435600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:52:54.572789  435600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:52:54.655975  435600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:52:54.740906  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:52:54.752669  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:52:54.768753  435600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:52:54.768844  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.778390  435600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:52:54.778455  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.788836  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.798811  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.808674  435600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:52:54.817958  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.828129  435600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.843819  435600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:52:54.853214  435600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:52:54.861864  435600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:52:54.869843  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:52:54.951070  435600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:52:55.078241  435600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:52:55.078367  435600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:52:55.082213  435600 start.go:563] Will wait 60s for crictl version
	I0819 17:52:55.082306  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:52:55.085961  435600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:52:55.132058  435600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 17:52:55.132216  435600 ssh_runner.go:195] Run: crio --version
	I0819 17:52:55.172806  435600 ssh_runner.go:195] Run: crio --version
	I0819 17:52:55.215621  435600 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 17:52:55.218302  435600 cli_runner.go:164] Run: docker network inspect addons-778133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:52:55.234250  435600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 17:52:55.237680  435600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:52:55.248391  435600 kubeadm.go:883] updating cluster {Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:52:55.248514  435600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:55.248577  435600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:52:55.327711  435600 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:52:55.327732  435600 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:52:55.327785  435600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:52:55.364932  435600 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:52:55.364957  435600 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:52:55.364964  435600 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 17:52:55.365069  435600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-778133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:52:55.365153  435600 ssh_runner.go:195] Run: crio config
	I0819 17:52:55.414872  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:52:55.414897  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:55.414909  435600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:52:55.414932  435600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-778133 NodeName:addons-778133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:52:55.415089  435600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-778133"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:52:55.415166  435600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:52:55.424196  435600 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:52:55.424282  435600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:52:55.432772  435600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 17:52:55.450351  435600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:52:55.468898  435600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 17:52:55.487124  435600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 17:52:55.490460  435600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:52:55.501118  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:52:55.587630  435600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:52:55.601693  435600 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133 for IP: 192.168.49.2
	I0819 17:52:55.601758  435600 certs.go:194] generating shared ca certs ...
	I0819 17:52:55.601790  435600 certs.go:226] acquiring lock for ca certs: {Name:mkc364a164a604cbf63463c0c33b0382c8bd91c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.602450  435600 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key
	I0819 17:52:55.920701  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt ...
	I0819 17:52:55.920733  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt: {Name:mk84e5bd91ccf3d6043b6e27954388f94bb2461d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.921341  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key ...
	I0819 17:52:55.921357  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key: {Name:mk8ba86f9bae0e688a1c6b9e22d920a748851a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:55.922486  435600 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key
	I0819 17:52:56.535188  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt ...
	I0819 17:52:56.535224  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt: {Name:mk6f1fc86ce7bfdf7d31502c33352c2d264f4667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:56.535401  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key ...
	I0819 17:52:56.535416  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key: {Name:mk377951487110cebed7bc7f6844bc68050b2a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:56.535498  435600 certs.go:256] generating profile certs ...
	I0819 17:52:56.535562  435600 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key
	I0819 17:52:56.535583  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt with IP's: []
	I0819 17:52:57.225251  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt ...
	I0819 17:52:57.225284  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: {Name:mk580b58cae13ef6ef9e12b7bd4f045cb2386b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.225482  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key ...
	I0819 17:52:57.225495  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.key: {Name:mk9aa22a4b7d94e3184414f40370806d2554e00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.225586  435600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4
	I0819 17:52:57.225606  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 17:52:57.778373  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 ...
	I0819 17:52:57.778405  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4: {Name:mka10be691659db94ac0ae80c1c9fc1ba377b153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.778589  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4 ...
	I0819 17:52:57.778605  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4: {Name:mke58d611cea1f5604124e227ad5c804259fa988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:57.778695  435600 certs.go:381] copying /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt.c6b921d4 -> /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt
	I0819 17:52:57.778775  435600 certs.go:385] copying /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key.c6b921d4 -> /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key
	I0819 17:52:57.778828  435600 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key
	I0819 17:52:57.778848  435600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt with IP's: []
	I0819 17:52:58.138260  435600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt ...
	I0819 17:52:58.138291  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt: {Name:mk3e89ea844ce45b8320e564497fe77665ea72c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:58.138891  435600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key ...
	I0819 17:52:58.138907  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key: {Name:mk309af0ce5fdf09daae71bd5a79b07fa68cad18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:58.139449  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 17:52:58.139500  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/ca.pem (1082 bytes)
	I0819 17:52:58.139530  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:52:58.139557  435600 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-429440/.minikube/certs/key.pem (1679 bytes)
	I0819 17:52:58.140167  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:52:58.165186  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:52:58.190147  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:52:58.214350  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 17:52:58.238560  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:52:58.262270  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 17:52:58.286474  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:52:58.311178  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:52:58.335181  435600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:52:58.360433  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:52:58.378594  435600 ssh_runner.go:195] Run: openssl version
	I0819 17:52:58.384328  435600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:52:58.394731  435600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.398362  435600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.398456  435600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:52:58.405265  435600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:52:58.414661  435600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:52:58.418014  435600 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:52:58.418066  435600 kubeadm.go:392] StartCluster: {Name:addons-778133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-778133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:58.418148  435600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:52:58.418203  435600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:52:58.454343  435600 cri.go:89] found id: ""
	I0819 17:52:58.454413  435600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:52:58.463413  435600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:52:58.472397  435600 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 17:52:58.472516  435600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:52:58.483693  435600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:52:58.483716  435600 kubeadm.go:157] found existing configuration files:
	
	I0819 17:52:58.483779  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:52:58.492569  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:52:58.492636  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:52:58.501194  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:52:58.510115  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:52:58.510182  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:52:58.518906  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:52:58.527586  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:52:58.527670  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:52:58.536357  435600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:52:58.545236  435600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:52:58.545304  435600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:52:58.553990  435600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 17:52:58.590799  435600 kubeadm.go:310] W0819 17:52:58.590063    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:52:58.592095  435600 kubeadm.go:310] W0819 17:52:58.591472    1180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:52:58.612285  435600 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 17:52:58.684427  435600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:53:14.449959  435600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:53:14.450042  435600 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:53:14.450152  435600 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 17:53:14.450224  435600 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 17:53:14.450262  435600 kubeadm.go:310] OS: Linux
	I0819 17:53:14.450309  435600 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 17:53:14.450359  435600 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 17:53:14.450408  435600 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 17:53:14.450458  435600 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 17:53:14.450508  435600 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 17:53:14.450557  435600 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 17:53:14.450604  435600 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 17:53:14.450654  435600 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 17:53:14.450701  435600 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 17:53:14.450772  435600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:53:14.450866  435600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:53:14.450955  435600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:53:14.451017  435600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:53:14.453791  435600 out.go:235]   - Generating certificates and keys ...
	I0819 17:53:14.453880  435600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:53:14.453946  435600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:53:14.454026  435600 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:53:14.454086  435600 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:53:14.454150  435600 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:53:14.454201  435600 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:53:14.454256  435600 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:53:14.454376  435600 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-778133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:53:14.454431  435600 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:53:14.454544  435600 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-778133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:53:14.454613  435600 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:53:14.454681  435600 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:53:14.454727  435600 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:53:14.454784  435600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:53:14.454836  435600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:53:14.454893  435600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:53:14.454951  435600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:53:14.455016  435600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:53:14.455072  435600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:53:14.455157  435600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:53:14.455224  435600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:53:14.457629  435600 out.go:235]   - Booting up control plane ...
	I0819 17:53:14.457738  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:53:14.457832  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:53:14.457915  435600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:53:14.458034  435600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:53:14.458120  435600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:53:14.458161  435600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:53:14.458295  435600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:53:14.458398  435600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:53:14.458459  435600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001858221s
	I0819 17:53:14.458567  435600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:53:14.458643  435600 kubeadm.go:310] [api-check] The API server is healthy after 6.501929959s
	I0819 17:53:14.458766  435600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:53:14.458903  435600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:53:14.458972  435600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:53:14.459178  435600 kubeadm.go:310] [mark-control-plane] Marking the node addons-778133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:53:14.459245  435600 kubeadm.go:310] [bootstrap-token] Using token: a0y4tw.zf7e6vdo3kh8x28x
	I0819 17:53:14.461932  435600 out.go:235]   - Configuring RBAC rules ...
	I0819 17:53:14.462057  435600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:53:14.462164  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:53:14.462313  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:53:14.462483  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:53:14.462622  435600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:53:14.462723  435600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:53:14.462866  435600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:53:14.462916  435600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:53:14.462967  435600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:53:14.462976  435600 kubeadm.go:310] 
	I0819 17:53:14.463034  435600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:53:14.463041  435600 kubeadm.go:310] 
	I0819 17:53:14.463115  435600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:53:14.463123  435600 kubeadm.go:310] 
	I0819 17:53:14.463147  435600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:53:14.463207  435600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:53:14.463261  435600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:53:14.463269  435600 kubeadm.go:310] 
	I0819 17:53:14.463321  435600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:53:14.463327  435600 kubeadm.go:310] 
	I0819 17:53:14.463373  435600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:53:14.463381  435600 kubeadm.go:310] 
	I0819 17:53:14.463433  435600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:53:14.463508  435600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:53:14.463579  435600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:53:14.463587  435600 kubeadm.go:310] 
	I0819 17:53:14.463669  435600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:53:14.463745  435600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:53:14.463753  435600 kubeadm.go:310] 
	I0819 17:53:14.463834  435600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a0y4tw.zf7e6vdo3kh8x28x \
	I0819 17:53:14.463936  435600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e18b21b1696fc0b5c17033532881e73bdede18d2af0b9932aa5de205ca4b73 \
	I0819 17:53:14.463959  435600 kubeadm.go:310] 	--control-plane 
	I0819 17:53:14.463963  435600 kubeadm.go:310] 
	I0819 17:53:14.464046  435600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:53:14.464055  435600 kubeadm.go:310] 
	I0819 17:53:14.464146  435600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a0y4tw.zf7e6vdo3kh8x28x \
	I0819 17:53:14.464333  435600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0e18b21b1696fc0b5c17033532881e73bdede18d2af0b9932aa5de205ca4b73 
	I0819 17:53:14.464347  435600 cni.go:84] Creating CNI manager for ""
	I0819 17:53:14.464356  435600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:53:14.467013  435600 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:53:14.469705  435600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:53:14.473910  435600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:53:14.473930  435600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:53:14.492843  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:53:14.783597  435600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:53:14.783733  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:14.783827  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-778133 minikube.k8s.io/updated_at=2024_08_19T17_53_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=addons-778133 minikube.k8s.io/primary=true
	I0819 17:53:14.940380  435600 ops.go:34] apiserver oom_adj: -16
	I0819 17:53:14.940476  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:15.440843  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:15.941308  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:16.441481  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:16.940605  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:17.441105  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:17.940578  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:18.440999  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:18.940591  435600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:53:19.043283  435600 kubeadm.go:1113] duration metric: took 4.259596356s to wait for elevateKubeSystemPrivileges
	I0819 17:53:19.043337  435600 kubeadm.go:394] duration metric: took 20.62527546s to StartCluster
	I0819 17:53:19.043374  435600 settings.go:142] acquiring lock: {Name:mk90a62cf51d9178249af9ac62d14840346a8775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:53:19.043551  435600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:53:19.043988  435600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/kubeconfig: {Name:mkf3f1794a92fe24d6cafa4b1b651286dbd5b9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:53:19.044269  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:53:19.044466  435600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:53:19.044638  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:53:19.044686  435600 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:53:19.044771  435600 addons.go:69] Setting yakd=true in profile "addons-778133"
	I0819 17:53:19.044796  435600 addons.go:234] Setting addon yakd=true in "addons-778133"
	I0819 17:53:19.044822  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.045292  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.045601  435600 addons.go:69] Setting inspektor-gadget=true in profile "addons-778133"
	I0819 17:53:19.045628  435600 addons.go:234] Setting addon inspektor-gadget=true in "addons-778133"
	I0819 17:53:19.045656  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.046067  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.046398  435600 addons.go:69] Setting metrics-server=true in profile "addons-778133"
	I0819 17:53:19.046432  435600 addons.go:234] Setting addon metrics-server=true in "addons-778133"
	I0819 17:53:19.046462  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.046924  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.047055  435600 addons.go:69] Setting cloud-spanner=true in profile "addons-778133"
	I0819 17:53:19.047080  435600 addons.go:234] Setting addon cloud-spanner=true in "addons-778133"
	I0819 17:53:19.047107  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.047483  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.047955  435600 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-778133"
	I0819 17:53:19.048027  435600 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-778133"
	I0819 17:53:19.048055  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.048501  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.051145  435600 addons.go:69] Setting default-storageclass=true in profile "addons-778133"
	I0819 17:53:19.051201  435600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-778133"
	I0819 17:53:19.051569  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.061323  435600 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-778133"
	I0819 17:53:19.061371  435600 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-778133"
	I0819 17:53:19.061406  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.061839  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.073715  435600 addons.go:69] Setting gcp-auth=true in profile "addons-778133"
	I0819 17:53:19.073813  435600 mustload.go:65] Loading cluster: addons-778133
	I0819 17:53:19.074050  435600 config.go:182] Loaded profile config "addons-778133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:53:19.074431  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.076294  435600 addons.go:69] Setting registry=true in profile "addons-778133"
	I0819 17:53:19.076345  435600 addons.go:234] Setting addon registry=true in "addons-778133"
	I0819 17:53:19.076384  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.076869  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.081321  435600 addons.go:69] Setting storage-provisioner=true in profile "addons-778133"
	I0819 17:53:19.081358  435600 addons.go:234] Setting addon storage-provisioner=true in "addons-778133"
	I0819 17:53:19.081401  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.081822  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.083372  435600 addons.go:69] Setting ingress=true in profile "addons-778133"
	I0819 17:53:19.083412  435600 addons.go:234] Setting addon ingress=true in "addons-778133"
	I0819 17:53:19.083457  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.083896  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.088880  435600 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-778133"
	I0819 17:53:19.088929  435600 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-778133"
	I0819 17:53:19.089294  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100316  435600 addons.go:69] Setting ingress-dns=true in profile "addons-778133"
	I0819 17:53:19.100366  435600 addons.go:234] Setting addon ingress-dns=true in "addons-778133"
	I0819 17:53:19.100409  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.100873  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100316  435600 addons.go:69] Setting volcano=true in profile "addons-778133"
	I0819 17:53:19.116501  435600 addons.go:234] Setting addon volcano=true in "addons-778133"
	I0819 17:53:19.116572  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.117090  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.100330  435600 addons.go:69] Setting volumesnapshots=true in profile "addons-778133"
	I0819 17:53:19.117664  435600 addons.go:234] Setting addon volumesnapshots=true in "addons-778133"
	I0819 17:53:19.117694  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.118084  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.131019  435600 out.go:177] * Verifying Kubernetes components...
	I0819 17:53:19.133985  435600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:53:19.142243  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:53:19.148942  435600 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:53:19.151852  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:53:19.151917  435600 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:53:19.152025  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.173802  435600 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:53:19.182043  435600 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:53:19.182112  435600 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:53:19.182216  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.197386  435600 addons.go:234] Setting addon default-storageclass=true in "addons-778133"
	I0819 17:53:19.197441  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.197882  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.198957  435600 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:53:19.201610  435600 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:53:19.201658  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:53:19.201753  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.208496  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:53:19.211427  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:53:19.214150  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:53:19.216877  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:53:19.219468  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:53:19.219568  435600 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:53:19.220975  435600 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:53:19.230252  435600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:53:19.248325  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.250787  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:53:19.250812  435600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:53:19.251027  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.252131  435600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:53:19.252204  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:53:19.252737  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.267758  435600 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:53:19.253384  435600 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:53:19.253496  435600 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:53:19.268568  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:53:19.268749  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.297072  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:53:19.299632  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:53:19.302150  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:53:19.302174  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:53:19.302248  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.316394  435600 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:53:19.316414  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:53:19.316475  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	W0819 17:53:19.330995  435600 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:53:19.333789  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.337575  435600 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-778133"
	I0819 17:53:19.337612  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:19.338042  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:19.366042  435600 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:53:19.368569  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:53:19.368592  435600 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:53:19.368669  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.376012  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.380385  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:53:19.380869  435600 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:53:19.391356  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:19.391664  435600 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:53:19.391679  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:53:19.391746  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.407751  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:53:19.424343  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.424387  435600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:53:19.424401  435600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:53:19.424453  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.425312  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.425956  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.426306  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.427538  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:19.431214  435600 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:53:19.431236  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:53:19.431298  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.458566  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.495521  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.523666  435600 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:53:19.524827  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.536882  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.544758  435600 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:53:19.548569  435600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:53:19.548591  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:53:19.548653  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:19.550001  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.553886  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.597736  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:19.741088  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:53:19.741158  435600 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:53:19.917131  435600 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:53:19.917212  435600 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:53:19.987203  435600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:53:20.000404  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:53:20.000468  435600 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:53:20.003184  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:53:20.003246  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:53:20.012865  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:53:20.012957  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:53:20.082254  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:53:20.086304  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:53:20.086331  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:53:20.094614  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:53:20.101908  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:53:20.117266  435600 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:53:20.117293  435600 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:53:20.120030  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:53:20.120928  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:53:20.126924  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:53:20.129771  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:53:20.153471  435600 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:53:20.153557  435600 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:53:20.158099  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:53:20.158168  435600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:53:20.198017  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:53:20.198093  435600 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:53:20.202933  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:53:20.203001  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:53:20.244340  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:53:20.244414  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:53:20.285794  435600 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:53:20.285869  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:53:20.331354  435600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:53:20.331428  435600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:53:20.334826  435600 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:53:20.334903  435600 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:53:20.410747  435600 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:53:20.410817  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:53:20.429640  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:53:20.429717  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:53:20.436985  435600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:53:20.437066  435600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:53:20.482877  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:53:20.490864  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:53:20.573219  435600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:53:20.573293  435600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:53:20.578867  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:53:20.578950  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:53:20.626119  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:53:20.626192  435600 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:53:20.658526  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:53:20.727978  435600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:53:20.728056  435600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:53:20.787119  435600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:53:20.787195  435600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:53:20.853899  435600 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:20.853967  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:53:20.870139  435600 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:53:20.870221  435600 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:53:20.942636  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:53:20.942702  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:53:20.991066  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:21.040983  435600 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:53:21.041055  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:53:21.107823  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:53:21.107901  435600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:53:21.226931  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:53:21.286903  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:53:21.286962  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:53:21.498862  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:53:21.498935  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:53:21.521379  435600 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.534100507s)
	I0819 17:53:21.521623  435600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.113846223s)
	I0819 17:53:21.521660  435600 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 17:53:21.523121  435600 node_ready.go:35] waiting up to 6m0s for node "addons-778133" to be "Ready" ...
	I0819 17:53:21.705364  435600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:53:21.705445  435600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:53:21.886914  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:53:22.863456  435600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-778133" context rescaled to 1 replicas
	I0819 17:53:23.722289  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:24.865105  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.782814258s)
	I0819 17:53:24.865189  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.770549818s)
	I0819 17:53:24.865391  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.763459728s)
	I0819 17:53:25.087167  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.967097096s)
	I0819 17:53:25.087370  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.966420614s)
	I0819 17:53:26.018123  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.891119426s)
	I0819 17:53:26.018159  435600 addons.go:475] Verifying addon ingress=true in "addons-778133"
	I0819 17:53:26.018330  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.888495641s)
	I0819 17:53:26.018374  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.535426159s)
	I0819 17:53:26.018449  435600 addons.go:475] Verifying addon registry=true in "addons-778133"
	I0819 17:53:26.018651  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.527715791s)
	I0819 17:53:26.018674  435600 addons.go:475] Verifying addon metrics-server=true in "addons-778133"
	I0819 17:53:26.018712  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.360117926s)
	I0819 17:53:26.018941  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.027794646s)
	W0819 17:53:26.019196  435600 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:53:26.019221  435600 retry.go:31] will retry after 359.450556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:53:26.019008  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.792007386s)
	I0819 17:53:26.021187  435600 out.go:177] * Verifying registry addon...
	I0819 17:53:26.021187  435600 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-778133 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:53:26.021302  435600 out.go:177] * Verifying ingress addon...
	I0819 17:53:26.025543  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:53:26.025559  435600 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:53:26.048494  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:26.068514  435600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:53:26.068592  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:26.076420  435600 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:53:26.076493  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:26.375689  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.488731363s)
	I0819 17:53:26.375768  435600 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-778133"
	I0819 17:53:26.378665  435600 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:53:26.378872  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:53:26.382308  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:53:26.395827  435600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:53:26.395895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:26.549746  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:26.550984  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:26.872488  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:53:26.872643  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:26.887323  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:26.898899  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:27.043584  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:27.045284  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:27.159918  435600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:53:27.241385  435600 addons.go:234] Setting addon gcp-auth=true in "addons-778133"
	I0819 17:53:27.241486  435600 host.go:66] Checking if "addons-778133" exists ...
	I0819 17:53:27.242039  435600 cli_runner.go:164] Run: docker container inspect addons-778133 --format={{.State.Status}}
	I0819 17:53:27.277697  435600 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:53:27.277750  435600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-778133
	I0819 17:53:27.308407  435600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33166 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/addons-778133/id_rsa Username:docker}
	I0819 17:53:27.401676  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:27.535762  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:27.537305  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:27.626609  435600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.247696637s)
	I0819 17:53:27.629562  435600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:53:27.631923  435600 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:53:27.634442  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:53:27.634498  435600 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:53:27.660609  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:53:27.660686  435600 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:53:27.681354  435600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:53:27.681425  435600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:53:27.701091  435600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:53:27.886640  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.029170  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:28.036708  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:28.379604  435600 addons.go:475] Verifying addon gcp-auth=true in "addons-778133"
	I0819 17:53:28.382480  435600 out.go:177] * Verifying gcp-auth addon...
	I0819 17:53:28.387338  435600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:53:28.404125  435600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:53:28.404194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:28.404976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.528725  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:28.533955  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:28.535160  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:28.885969  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:28.890302  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:29.033784  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:29.034984  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:29.387192  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:29.397238  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:29.530735  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:29.531865  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:29.887299  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:29.891503  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:30.033548  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:30.034759  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:30.386439  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:30.390594  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:30.534667  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:30.535830  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:30.536289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:30.886445  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:30.890362  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:31.033986  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:31.034239  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:31.385642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:31.390600  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:31.529314  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:31.530207  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:31.886040  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:31.890434  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:32.031160  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:32.031957  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:32.386522  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:32.392165  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:32.533939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:32.535303  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:32.886512  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:32.891194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:33.027901  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:33.029809  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:33.030732  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:33.386855  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:33.391071  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:33.530582  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:33.531502  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:33.886457  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:33.891075  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:34.029663  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:34.030390  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:34.386158  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:34.391076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:34.529770  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:34.532608  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:34.886304  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:34.890645  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:35.031019  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:35.031315  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:35.386068  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:35.390642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:35.526438  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:35.528496  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:35.529518  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:35.886456  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:35.890310  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:36.030620  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:36.032243  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:36.385607  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:36.390705  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:36.529311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:36.529774  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:36.885877  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:36.890027  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:37.031954  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:37.033327  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:37.386054  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:37.390196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:37.526482  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:37.529072  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:37.530071  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:37.885608  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:37.890502  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:38.030196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:38.030708  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:38.386095  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:38.390204  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:38.528405  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:38.529466  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:38.886319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:38.890408  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:39.029467  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:39.031385  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:39.386257  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:39.390595  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:39.528145  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:39.529900  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:39.530494  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:39.886652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:39.891224  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:40.030132  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:40.030786  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:40.385805  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:40.393076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:40.530964  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:40.531251  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:40.885860  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:40.891089  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:41.029423  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:41.030289  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:41.385846  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:41.391134  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:41.528396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:41.529898  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:41.885677  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:41.890399  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:42.034559  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:42.034631  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:42.035825  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:42.385988  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:42.391017  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:42.531030  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:42.531516  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:42.886444  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:42.890243  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:43.029569  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:43.030418  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:43.385872  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:43.390292  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:43.529122  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:43.530901  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:43.886493  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:43.890768  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:44.028424  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:44.029795  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:44.385710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:44.391090  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:44.527491  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:44.530660  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:44.531671  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:44.886826  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:44.890612  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:45.032767  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:45.033689  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:45.386397  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:45.390272  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:45.529185  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:45.530347  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:45.885972  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:45.890261  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:46.031447  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:46.033083  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:46.385663  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:46.390983  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:46.528431  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:46.530192  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:46.530922  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:46.886593  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:46.891271  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:47.030394  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:47.031485  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:47.386716  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:47.393693  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:47.529837  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:47.530613  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:47.886078  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:47.890256  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:48.030396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:48.031369  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:48.385976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:48.390219  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:48.529242  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:48.530601  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:48.531879  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:48.886681  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:48.891028  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:49.032007  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:49.032539  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:49.386500  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:49.391291  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:49.530958  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:49.531191  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:49.888407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:49.891996  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:50.030947  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:50.032419  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:50.386349  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:50.390730  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:50.530094  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:50.530828  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:50.886948  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:50.890952  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:51.029298  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:51.031221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:51.032093  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:51.385750  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:51.391137  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:51.530449  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:51.532119  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:51.885652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:51.891530  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:52.029770  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:52.030277  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:52.385806  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:52.390928  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:52.529805  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:52.530177  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:52.887071  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:52.890976  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:53.030682  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:53.031713  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:53.386869  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:53.390255  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:53.526755  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:53.529868  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:53.530950  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:53.886650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:53.890636  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:54.030233  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:54.031409  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:54.386515  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:54.391748  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:54.528639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:54.529866  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:54.887626  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:54.890543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:55.030333  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:55.031616  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:55.386399  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:55.390189  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:55.527654  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:55.531378  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:55.532699  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:55.886402  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:55.890549  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:56.030521  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:56.030736  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:56.386430  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:56.390731  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:56.530780  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:56.531669  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:56.886389  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:56.890066  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:57.031150  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:57.031312  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:57.386015  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:57.390902  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:57.530529  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:57.531334  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:57.886352  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:57.889982  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:58.027266  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:53:58.030878  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:58.032479  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:58.386235  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:58.391013  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:58.529311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:58.531055  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:58.886274  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:58.891136  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:59.028084  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:59.029128  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:59.385698  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:59.390446  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:59.529306  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:53:59.529612  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:53:59.886150  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:53:59.891111  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:00.028706  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:00.045985  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:00.052584  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:00.386786  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:00.391305  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:00.533179  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:00.533901  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:00.886363  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:00.890439  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:01.031040  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:01.032033  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:01.386650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:01.390985  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:01.530049  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:01.531177  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:01.886102  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:01.892102  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:02.030254  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:02.031111  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:02.385933  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:02.390925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:02.527196  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:02.529284  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:02.531808  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:02.887194  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:02.891130  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:03.029548  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:03.030561  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:03.386855  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:03.391379  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:03.530289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:03.531087  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:03.885822  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:03.891797  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:04.030076  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:04.031154  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:04.385888  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:04.391155  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:04.529652  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:04.530576  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:04.886221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:04.891181  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:05.029884  435600 node_ready.go:53] node "addons-778133" has status "Ready":"False"
	I0819 17:54:05.032419  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:05.032786  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:05.388844  435600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:54:05.388870  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:05.392895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:05.556940  435600 node_ready.go:49] node "addons-778133" has status "Ready":"True"
	I0819 17:54:05.556966  435600 node_ready.go:38] duration metric: took 44.033649296s for node "addons-778133" to be "Ready" ...
	I0819 17:54:05.556982  435600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:54:05.576864  435600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:54:05.576891  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:05.577915  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:05.609884  435600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:05.890665  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:05.897733  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:06.040767  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:06.042671  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:06.389867  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:06.393316  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:06.531126  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:06.531680  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:06.887508  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:06.890619  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:07.032729  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:07.034572  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:07.117273  435600 pod_ready.go:93] pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.117297  435600 pod_ready.go:82] duration metric: took 1.507376465s for pod "coredns-6f6b679f8f-l8nmv" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.117321  435600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.122145  435600 pod_ready.go:93] pod "etcd-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.122211  435600 pod_ready.go:82] duration metric: took 4.880868ms for pod "etcd-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.122240  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.131152  435600 pod_ready.go:93] pod "kube-apiserver-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.131182  435600 pod_ready.go:82] duration metric: took 8.918924ms for pod "kube-apiserver-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.131194  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.136343  435600 pod_ready.go:93] pod "kube-controller-manager-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.136368  435600 pod_ready.go:82] duration metric: took 5.165686ms for pod "kube-controller-manager-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.136383  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvz5" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.143997  435600 pod_ready.go:93] pod "kube-proxy-jzvz5" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.144024  435600 pod_ready.go:82] duration metric: took 7.633643ms for pod "kube-proxy-jzvz5" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.144036  435600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.387874  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:07.390522  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:07.528898  435600 pod_ready.go:93] pod "kube-scheduler-addons-778133" in "kube-system" namespace has status "Ready":"True"
	I0819 17:54:07.528927  435600 pod_ready.go:82] duration metric: took 384.88353ms for pod "kube-scheduler-addons-778133" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.528941  435600 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace to be "Ready" ...
	I0819 17:54:07.531092  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:07.531777  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:07.887511  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:07.890489  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:08.029727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:08.032725  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:08.389133  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:08.392289  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:08.534650  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:08.536444  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:08.889005  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:08.892092  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:09.034886  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:09.036310  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:09.389370  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:09.392378  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:09.533334  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:09.534092  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:09.539413  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:09.889410  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:09.893962  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:10.032779  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:10.037461  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:10.386743  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:10.391349  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:10.542686  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:10.547951  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:10.886798  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:10.890179  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:11.033595  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:11.038430  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:11.387587  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:11.390331  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:11.531253  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:11.531846  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:11.889109  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:11.891587  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:12.030992  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:12.047831  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:12.063237  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:12.389223  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:12.394026  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:12.534371  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:12.537987  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:12.891333  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:12.893646  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:13.030726  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:13.031421  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:13.387181  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:13.391343  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:13.531124  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:13.532276  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:13.887074  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:13.890887  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:14.034618  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:14.035146  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:14.388095  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:14.391031  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:14.532110  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:14.533195  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:14.538007  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:14.887631  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:14.890443  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:15.036048  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:15.038327  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:15.389995  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:15.394108  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:15.532081  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:15.533211  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:15.887863  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:15.891204  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:16.032615  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:16.031650  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:16.389029  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:16.392243  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:16.541460  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:16.543800  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:16.888148  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:16.891722  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:17.037586  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:17.038925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:17.043491  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:17.402262  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:17.404319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:17.566823  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:17.570485  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:17.964035  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:17.964907  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:18.059538  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:18.065624  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:18.398861  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:18.400576  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:18.538169  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:18.539900  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:18.892322  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:18.898838  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:19.033037  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:19.034423  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:19.399958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:19.400626  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:19.537803  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:19.538685  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:19.541016  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:19.888552  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:19.893733  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:20.039483  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:20.039947  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:20.387868  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:20.390898  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:20.532045  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:20.532405  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:20.887113  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:20.890444  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:21.030337  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:21.031109  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:21.387927  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:21.390699  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:21.533903  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:21.535971  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:21.543839  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:21.887574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:21.890616  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:22.030833  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:22.033251  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:22.387408  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:22.390680  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:22.530188  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:22.531835  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:22.891287  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:22.894319  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:23.031673  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:23.032596  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:23.387215  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:23.391012  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:23.530224  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:23.531692  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:23.887849  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:23.898080  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:24.032636  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:24.034418  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:24.038224  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:24.388195  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:24.392410  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:24.533325  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:24.534509  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:24.894703  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:24.896693  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:25.033221  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:25.034188  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:25.387885  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:25.391013  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:25.531310  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:25.531906  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:25.887771  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:25.890575  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:26.031739  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:26.032621  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:26.387727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:26.398036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:26.532121  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:26.533152  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:26.542468  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:26.887769  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:26.890700  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:27.042221  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:27.043494  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:27.387876  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:27.399106  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:27.588431  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:27.588537  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:27.890106  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:27.893305  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:28.039311  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:28.043923  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:28.389014  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:28.394724  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:28.557533  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:28.559056  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:28.590870  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:28.888085  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:28.890323  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:29.034913  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:29.036415  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:29.387026  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:29.390958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:29.534763  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:29.536789  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:29.889296  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:29.894385  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:30.030420  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:30.034926  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:30.387186  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:30.390871  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:30.535751  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:30.536758  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:30.888196  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:30.891406  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:31.030036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:31.032558  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:31.042930  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:31.388035  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:31.390835  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:31.532172  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:31.533884  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:31.888308  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:31.891307  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:32.035089  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:32.035716  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:32.389139  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:32.394476  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:32.532585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:32.533707  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:32.888028  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:32.890628  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:33.030203  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:33.031231  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:33.388033  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:33.390371  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:33.530998  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:33.532009  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:33.536099  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:33.887690  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:33.890405  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:34.030999  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:34.032183  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:34.389073  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:34.393534  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:34.529853  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:34.531631  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:34.891133  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:34.893727  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:35.030814  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:35.034364  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:35.392388  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:35.395725  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:35.538183  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:35.539132  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:35.545697  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:35.888209  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:35.891180  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:36.030923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:36.032934  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:36.387696  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:36.390412  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:36.530908  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:36.531574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:36.888079  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:36.891361  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:37.031826  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:37.032606  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:37.390119  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:37.395426  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:37.530962  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:37.532678  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:37.887428  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:37.890537  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:38.032566  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:38.033705  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:38.037891  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:38.388483  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:38.392400  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:38.536399  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:38.539164  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:38.893679  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:38.894460  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:39.034080  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:39.049556  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:39.388692  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:39.391816  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:39.530530  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:39.532186  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:39.887189  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:39.891429  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:40.032316  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:40.032807  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:40.388478  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:40.391296  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:40.533905  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:40.536536  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:40.538180  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:40.889987  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:40.890830  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:41.033748  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:41.035225  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:41.387688  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:41.391213  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:41.532350  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:41.533865  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:41.888048  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:41.893152  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:42.030543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:42.032108  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:42.388943  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:42.392464  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:42.530743  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:42.531440  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:42.887905  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:42.890515  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:43.029487  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:43.031617  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:43.036592  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:43.388134  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:43.391910  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:43.530657  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:43.531110  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:43.887365  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:43.891187  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:44.031670  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:44.032743  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:44.390958  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:44.393015  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:44.530037  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:44.532205  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:44.889665  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:44.892781  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:45.038014  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:45.042049  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:45.044402  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:45.387957  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:45.390543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:45.530479  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:45.531275  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:45.903354  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:45.914574  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:46.088289  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:46.089287  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:46.387639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:46.399011  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:46.531306  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:46.531706  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:46.888653  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:46.891407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:47.031390  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:47.031656  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:47.387538  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:47.390752  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:47.534401  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:47.539579  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:47.543769  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:47.888729  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:47.891895  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:48.035780  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:48.037286  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:48.386934  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:48.390604  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:48.536717  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:48.539934  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:48.892047  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:48.893396  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:49.030794  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:49.032161  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:49.395425  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:49.399690  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:49.533389  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:49.534301  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:49.887771  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:49.891400  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:50.033533  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:50.033857  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:50.039912  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:50.389634  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:50.393585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:50.542305  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:50.542889  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:50.888099  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:50.891292  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:51.032816  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:51.033527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:51.390638  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:51.396879  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:51.531682  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:51.532782  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:51.887157  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:51.891407  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:52.029270  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:52.030754  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:52.386973  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:52.390844  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:52.533258  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:52.534970  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:52.538274  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:52.887588  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:52.893025  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:53.031856  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:53.035127  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:53.388260  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:53.391939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:53.531627  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:53.533981  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:53.889008  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:53.895288  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:54.034065  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:54.035910  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:54.387847  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:54.391332  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:54.532247  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:54.541467  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:54.553586  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:54.889078  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:54.890937  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:55.035824  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:55.042561  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:55.387724  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:55.391090  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:55.531326  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:55.531590  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:55.889738  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:55.892865  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:56.033842  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:56.035178  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:56.387720  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:56.390879  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:56.531271  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:56.531797  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:56.887923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:56.890468  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:57.029697  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:54:57.032343  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:57.036118  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:57.386784  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:57.392820  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:57.531227  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:57.532053  435600 kapi.go:107] duration metric: took 1m31.506512137s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:54:57.886836  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:57.890881  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:58.030533  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:58.389125  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:58.394836  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:58.531673  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:58.887543  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:58.890869  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:59.030434  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:59.036286  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:54:59.387527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:59.390639  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:54:59.538024  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:54:59.888328  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:54:59.895710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:00.050978  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:00.387498  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:00.394139  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:00.546408  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:00.894585  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:00.895240  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.030666  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:01.039280  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:01.393960  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:01.397669  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.530166  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:01.911710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:01.919329  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.041970  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:02.388959  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.393291  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:02.531642  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:02.888583  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:02.891575  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:03.031055  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:03.388110  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:03.391010  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:03.532488  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:03.543327  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:03.888710  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:03.892951  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:04.031445  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:04.393485  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:04.397677  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:04.530905  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:04.888987  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:04.893545  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:05.032065  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:05.388527  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:05.391621  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:05.532019  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:05.887859  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:05.890939  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:06.032539  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:06.041189  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:06.388540  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:06.392489  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:06.538783  435600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:55:06.887909  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:06.890434  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:07.033311  435600 kapi.go:107] duration metric: took 1m41.007746575s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:55:07.389571  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:07.394721  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:07.893385  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:07.895241  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:08.389149  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:08.393891  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:08.534733  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:08.887925  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:08.890642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:55:09.387917  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:09.390615  435600 kapi.go:107] duration metric: took 1m41.003275572s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:55:09.392258  435600 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-778133 cluster.
	I0819 17:55:09.394092  435600 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:55:09.395478  435600 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:55:09.888456  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:10.387801  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:10.547386  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:10.887309  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:11.393162  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:11.888338  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:12.387572  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:12.887660  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:13.035936  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:13.387923  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:13.887121  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:14.387911  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:14.887137  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:15.063120  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:15.387642  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:15.888009  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:16.387357  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:16.887197  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:17.442036  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:17.544589  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:17.888603  435600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:55:18.387107  435600 kapi.go:107] duration metric: took 1m52.004796552s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:55:18.388520  435600 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 17:55:18.389855  435600 addons.go:510] duration metric: took 1m59.345150953s for enable addons: enabled=[storage-provisioner cloud-spanner default-storageclass nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 17:55:20.035985  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:22.036145  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:24.535033  435600 pod_ready.go:103] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"False"
	I0819 17:55:26.035668  435600 pod_ready.go:93] pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace has status "Ready":"True"
	I0819 17:55:26.035695  435600 pod_ready.go:82] duration metric: took 1m18.506745381s for pod "metrics-server-8988944d9-f95p9" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.035708  435600 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.041475  435600 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace has status "Ready":"True"
	I0819 17:55:26.041501  435600 pod_ready.go:82] duration metric: took 5.784742ms for pod "nvidia-device-plugin-daemonset-jf6ms" in "kube-system" namespace to be "Ready" ...
	I0819 17:55:26.041524  435600 pod_ready.go:39] duration metric: took 1m20.48450572s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:55:26.041541  435600 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:55:26.041576  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:26.041643  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:26.094173  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:26.094201  435600 cri.go:89] found id: ""
	I0819 17:55:26.094210  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:26.094266  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.097811  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:26.097914  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:26.145659  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:26.145737  435600 cri.go:89] found id: ""
	I0819 17:55:26.145751  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:26.145818  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.149430  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:26.149507  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:26.193399  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:26.193475  435600 cri.go:89] found id: ""
	I0819 17:55:26.193497  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:26.193588  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.198000  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:26.198082  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:26.240912  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:26.240936  435600 cri.go:89] found id: ""
	I0819 17:55:26.240945  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:26.241027  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.244567  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:26.244642  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:26.285113  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:26.285134  435600 cri.go:89] found id: ""
	I0819 17:55:26.285141  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:26.285221  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.288980  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:26.289110  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:26.329797  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:26.329820  435600 cri.go:89] found id: ""
	I0819 17:55:26.329827  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:26.329884  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.333464  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:26.333546  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:26.375207  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:26.375231  435600 cri.go:89] found id: ""
	I0819 17:55:26.375240  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:26.375294  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:26.379041  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:26.379064  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:26.443338  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:26.443790  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:26.462360  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:26.462397  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:26.659763  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:26.659794  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:26.724577  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:26.724614  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:26.770312  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:26.770344  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:26.820060  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:26.820091  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:26.858028  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:26.858055  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:26.941542  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:26.941588  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:26.999922  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:26.999968  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:27.091991  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:27.092029  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:27.166929  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:27.166960  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:29.765953  435600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:55:29.780182  435600 api_server.go:72] duration metric: took 2m10.735654266s to wait for apiserver process to appear ...
	I0819 17:55:29.780209  435600 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:55:29.780274  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:29.780333  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:29.818703  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:29.818721  435600 cri.go:89] found id: ""
	I0819 17:55:29.818729  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:29.818784  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.822223  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:29.822298  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:29.860809  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:29.860829  435600 cri.go:89] found id: ""
	I0819 17:55:29.860837  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:29.860893  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.864512  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:29.864598  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:29.907361  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:29.907384  435600 cri.go:89] found id: ""
	I0819 17:55:29.907393  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:29.907450  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.911800  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:29.911874  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:29.950945  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:29.951019  435600 cri.go:89] found id: ""
	I0819 17:55:29.951041  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:29.951115  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.954853  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:29.954951  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:29.993143  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:29.993168  435600 cri.go:89] found id: ""
	I0819 17:55:29.993176  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:29.993268  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:29.997285  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:29.997413  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:30.052856  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:30.052880  435600 cri.go:89] found id: ""
	I0819 17:55:30.052889  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:30.052976  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:30.057285  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:30.057434  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:30.104928  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:30.105012  435600 cri.go:89] found id: ""
	I0819 17:55:30.105340  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:30.105425  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:30.110449  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:30.110482  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:30.170329  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:30.170374  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:30.233251  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:30.233285  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:30.299245  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:30.299282  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:30.358540  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:30.358572  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:30.455689  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:30.455725  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:30.546164  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:30.546199  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:30.563842  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:30.563871  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:30.730432  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:30.730468  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:30.846178  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:30.846218  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:30.892652  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:30.892685  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:30.965711  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:30.965755  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:33.573165  435600 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 17:55:33.581064  435600 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 17:55:33.582050  435600 api_server.go:141] control plane version: v1.31.0
	I0819 17:55:33.582077  435600 api_server.go:131] duration metric: took 3.801859602s to wait for apiserver health ...
	I0819 17:55:33.582088  435600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:55:33.582110  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:55:33.582177  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:55:33.629523  435600 cri.go:89] found id: "73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:33.629551  435600 cri.go:89] found id: ""
	I0819 17:55:33.629560  435600 logs.go:276] 1 containers: [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5]
	I0819 17:55:33.629620  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.633357  435600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:55:33.633437  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:55:33.672899  435600 cri.go:89] found id: "74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:33.672925  435600 cri.go:89] found id: ""
	I0819 17:55:33.672933  435600 logs.go:276] 1 containers: [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839]
	I0819 17:55:33.672993  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.676707  435600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:55:33.676790  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:55:33.733144  435600 cri.go:89] found id: "cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:33.733220  435600 cri.go:89] found id: ""
	I0819 17:55:33.733258  435600 logs.go:276] 1 containers: [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b]
	I0819 17:55:33.733361  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.737494  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:55:33.737566  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:55:33.778400  435600 cri.go:89] found id: "d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:33.778425  435600 cri.go:89] found id: ""
	I0819 17:55:33.778434  435600 logs.go:276] 1 containers: [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0]
	I0819 17:55:33.778489  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.782214  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:55:33.782286  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:55:33.823855  435600 cri.go:89] found id: "665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:33.823879  435600 cri.go:89] found id: ""
	I0819 17:55:33.823888  435600 logs.go:276] 1 containers: [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481]
	I0819 17:55:33.823945  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.827658  435600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:55:33.827752  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:55:33.872012  435600 cri.go:89] found id: "186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:33.872033  435600 cri.go:89] found id: ""
	I0819 17:55:33.872041  435600 logs.go:276] 1 containers: [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160]
	I0819 17:55:33.872120  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.877010  435600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:55:33.877108  435600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:55:33.923071  435600 cri.go:89] found id: "7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:33.923142  435600 cri.go:89] found id: ""
	I0819 17:55:33.923165  435600 logs.go:276] 1 containers: [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97]
	I0819 17:55:33.923255  435600 ssh_runner.go:195] Run: which crictl
	I0819 17:55:33.926912  435600 logs.go:123] Gathering logs for dmesg ...
	I0819 17:55:33.926985  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:55:33.943943  435600 logs.go:123] Gathering logs for coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] ...
	I0819 17:55:33.944015  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b"
	I0819 17:55:33.989898  435600 logs.go:123] Gathering logs for kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] ...
	I0819 17:55:33.989930  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0"
	I0819 17:55:34.045104  435600 logs.go:123] Gathering logs for container status ...
	I0819 17:55:34.045151  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:55:34.093048  435600 logs.go:123] Gathering logs for kubelet ...
	I0819 17:55:34.093080  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:55:34.179867  435600 logs.go:123] Gathering logs for kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] ...
	I0819 17:55:34.179903  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5"
	I0819 17:55:34.236800  435600 logs.go:123] Gathering logs for etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] ...
	I0819 17:55:34.236834  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839"
	I0819 17:55:34.297375  435600 logs.go:123] Gathering logs for kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] ...
	I0819 17:55:34.297410  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481"
	I0819 17:55:34.338907  435600 logs.go:123] Gathering logs for kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] ...
	I0819 17:55:34.338941  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160"
	I0819 17:55:34.420737  435600 logs.go:123] Gathering logs for kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] ...
	I0819 17:55:34.420772  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97"
	I0819 17:55:34.480945  435600 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:55:34.480976  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:55:34.572749  435600 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:55:34.572786  435600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:55:37.229390  435600 system_pods.go:59] 18 kube-system pods found
	I0819 17:55:37.229437  435600 system_pods.go:61] "coredns-6f6b679f8f-l8nmv" [ff489ec3-aafb-48e5-8b44-b3a688cdf8f4] Running
	I0819 17:55:37.229445  435600 system_pods.go:61] "csi-hostpath-attacher-0" [978351fc-eedc-46b0-8837-0408dbfe0733] Running
	I0819 17:55:37.229450  435600 system_pods.go:61] "csi-hostpath-resizer-0" [6f90ec57-00c5-4d1b-aa5a-8ed4775b934b] Running
	I0819 17:55:37.229454  435600 system_pods.go:61] "csi-hostpathplugin-qvmqd" [a19ee3c9-56ff-43fe-81d5-14d7b24057e2] Running
	I0819 17:55:37.229458  435600 system_pods.go:61] "etcd-addons-778133" [52f3011b-a727-4704-92b2-bf4441e9d845] Running
	I0819 17:55:37.229462  435600 system_pods.go:61] "kindnet-mnkhw" [48608aa5-fb50-4961-b41f-4c6fecece03c] Running
	I0819 17:55:37.229467  435600 system_pods.go:61] "kube-apiserver-addons-778133" [054b4e48-3d18-4a58-8af9-31c4acc00c4f] Running
	I0819 17:55:37.229473  435600 system_pods.go:61] "kube-controller-manager-addons-778133" [2de63fdd-9e5e-4ddb-87b0-b089a732b85f] Running
	I0819 17:55:37.229477  435600 system_pods.go:61] "kube-ingress-dns-minikube" [e58e7c8f-b313-444b-931c-07a556978e9f] Running
	I0819 17:55:37.229481  435600 system_pods.go:61] "kube-proxy-jzvz5" [e48349fd-8601-4066-913b-aa441c366b2b] Running
	I0819 17:55:37.229492  435600 system_pods.go:61] "kube-scheduler-addons-778133" [13fc982d-7f2c-4031-879b-81b8c20005f2] Running
	I0819 17:55:37.229496  435600 system_pods.go:61] "metrics-server-8988944d9-f95p9" [01704ab9-a4d6-4222-9216-dc0418048204] Running
	I0819 17:55:37.229500  435600 system_pods.go:61] "nvidia-device-plugin-daemonset-jf6ms" [64aac524-645a-4d2f-a7f0-16e99e357126] Running
	I0819 17:55:37.229504  435600 system_pods.go:61] "registry-6fb4cdfc84-jf8nh" [615dc4af-719f-4bfd-bd2e-4fe6e87fe0dc] Running
	I0819 17:55:37.229512  435600 system_pods.go:61] "registry-proxy-srkxv" [7eaaa77e-fb85-406d-86c6-1735b5cd1aeb] Running
	I0819 17:55:37.229521  435600 system_pods.go:61] "snapshot-controller-56fcc65765-8wkps" [04123999-4603-4c7e-ad1d-4b44f5b00eee] Running
	I0819 17:55:37.229526  435600 system_pods.go:61] "snapshot-controller-56fcc65765-psg4j" [317a730b-3c4a-419a-84a1-354749d88a48] Running
	I0819 17:55:37.229529  435600 system_pods.go:61] "storage-provisioner" [e2f4308c-5eed-4a83-86eb-cc99af197a86] Running
	I0819 17:55:37.229536  435600 system_pods.go:74] duration metric: took 3.647441971s to wait for pod list to return data ...
	I0819 17:55:37.229549  435600 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:55:37.232336  435600 default_sa.go:45] found service account: "default"
	I0819 17:55:37.232363  435600 default_sa.go:55] duration metric: took 2.806305ms for default service account to be created ...
	I0819 17:55:37.232379  435600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:55:37.242413  435600 system_pods.go:86] 18 kube-system pods found
	I0819 17:55:37.242458  435600 system_pods.go:89] "coredns-6f6b679f8f-l8nmv" [ff489ec3-aafb-48e5-8b44-b3a688cdf8f4] Running
	I0819 17:55:37.242466  435600 system_pods.go:89] "csi-hostpath-attacher-0" [978351fc-eedc-46b0-8837-0408dbfe0733] Running
	I0819 17:55:37.242471  435600 system_pods.go:89] "csi-hostpath-resizer-0" [6f90ec57-00c5-4d1b-aa5a-8ed4775b934b] Running
	I0819 17:55:37.242476  435600 system_pods.go:89] "csi-hostpathplugin-qvmqd" [a19ee3c9-56ff-43fe-81d5-14d7b24057e2] Running
	I0819 17:55:37.242481  435600 system_pods.go:89] "etcd-addons-778133" [52f3011b-a727-4704-92b2-bf4441e9d845] Running
	I0819 17:55:37.242487  435600 system_pods.go:89] "kindnet-mnkhw" [48608aa5-fb50-4961-b41f-4c6fecece03c] Running
	I0819 17:55:37.242492  435600 system_pods.go:89] "kube-apiserver-addons-778133" [054b4e48-3d18-4a58-8af9-31c4acc00c4f] Running
	I0819 17:55:37.242496  435600 system_pods.go:89] "kube-controller-manager-addons-778133" [2de63fdd-9e5e-4ddb-87b0-b089a732b85f] Running
	I0819 17:55:37.242500  435600 system_pods.go:89] "kube-ingress-dns-minikube" [e58e7c8f-b313-444b-931c-07a556978e9f] Running
	I0819 17:55:37.242504  435600 system_pods.go:89] "kube-proxy-jzvz5" [e48349fd-8601-4066-913b-aa441c366b2b] Running
	I0819 17:55:37.242508  435600 system_pods.go:89] "kube-scheduler-addons-778133" [13fc982d-7f2c-4031-879b-81b8c20005f2] Running
	I0819 17:55:37.242512  435600 system_pods.go:89] "metrics-server-8988944d9-f95p9" [01704ab9-a4d6-4222-9216-dc0418048204] Running
	I0819 17:55:37.242516  435600 system_pods.go:89] "nvidia-device-plugin-daemonset-jf6ms" [64aac524-645a-4d2f-a7f0-16e99e357126] Running
	I0819 17:55:37.242521  435600 system_pods.go:89] "registry-6fb4cdfc84-jf8nh" [615dc4af-719f-4bfd-bd2e-4fe6e87fe0dc] Running
	I0819 17:55:37.242525  435600 system_pods.go:89] "registry-proxy-srkxv" [7eaaa77e-fb85-406d-86c6-1735b5cd1aeb] Running
	I0819 17:55:37.242529  435600 system_pods.go:89] "snapshot-controller-56fcc65765-8wkps" [04123999-4603-4c7e-ad1d-4b44f5b00eee] Running
	I0819 17:55:37.242533  435600 system_pods.go:89] "snapshot-controller-56fcc65765-psg4j" [317a730b-3c4a-419a-84a1-354749d88a48] Running
	I0819 17:55:37.242537  435600 system_pods.go:89] "storage-provisioner" [e2f4308c-5eed-4a83-86eb-cc99af197a86] Running
	I0819 17:55:37.242546  435600 system_pods.go:126] duration metric: took 10.159637ms to wait for k8s-apps to be running ...
	I0819 17:55:37.242553  435600 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:55:37.242614  435600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:55:37.254550  435600 system_svc.go:56] duration metric: took 11.986559ms WaitForService to wait for kubelet
	I0819 17:55:37.254582  435600 kubeadm.go:582] duration metric: took 2m18.210057398s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:55:37.254603  435600 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:55:37.258002  435600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 17:55:37.258037  435600 node_conditions.go:123] node cpu capacity is 2
	I0819 17:55:37.258052  435600 node_conditions.go:105] duration metric: took 3.442377ms to run NodePressure ...
	I0819 17:55:37.258065  435600 start.go:241] waiting for startup goroutines ...
	I0819 17:55:37.258073  435600 start.go:246] waiting for cluster config update ...
	I0819 17:55:37.258094  435600 start.go:255] writing updated cluster config ...
	I0819 17:55:37.258394  435600 ssh_runner.go:195] Run: rm -f paused
	I0819 17:55:37.625312  435600 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:55:37.627067  435600 out.go:177] * Done! kubectl is now configured to use "addons-778133" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.648322474Z" level=info msg="Removing container: 1beb519a6558ce97bb14559938675998e76fb9d1e099d8fb0d99cbd93e1db732" id=69fa1653-2ac7-42e6-8ca9-b6056d592667 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.667461179Z" level=info msg="Removed container 1beb519a6558ce97bb14559938675998e76fb9d1e099d8fb0d99cbd93e1db732: ingress-nginx/ingress-nginx-admission-patch-hk6gt/patch" id=69fa1653-2ac7-42e6-8ca9-b6056d592667 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.669256582Z" level=info msg="Removing container: 68e92778b6e92a855be5f490ddfc927b274c11faa57e6943601cc8136469fad8" id=f45fcad8-68bc-4b31-b9ac-fc06e798cca8 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.687924534Z" level=info msg="Removed container 68e92778b6e92a855be5f490ddfc927b274c11faa57e6943601cc8136469fad8: ingress-nginx/ingress-nginx-admission-create-fptqb/create" id=f45fcad8-68bc-4b31-b9ac-fc06e798cca8 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.689858090Z" level=info msg="Stopping pod sandbox: 32c8f4d54a6d3c8a2586297154e500545cb5aa1a16aabda9a4fbc71835a3ab76" id=bfb4c29a-65b6-4cff-89e0-2ad0bacd3a8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.689906310Z" level=info msg="Stopped pod sandbox (already stopped): 32c8f4d54a6d3c8a2586297154e500545cb5aa1a16aabda9a4fbc71835a3ab76" id=bfb4c29a-65b6-4cff-89e0-2ad0bacd3a8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.690290273Z" level=info msg="Removing pod sandbox: 32c8f4d54a6d3c8a2586297154e500545cb5aa1a16aabda9a4fbc71835a3ab76" id=e35983d9-9888-4ad7-8806-2667cf90545b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.701880650Z" level=info msg="Removed pod sandbox: 32c8f4d54a6d3c8a2586297154e500545cb5aa1a16aabda9a4fbc71835a3ab76" id=e35983d9-9888-4ad7-8806-2667cf90545b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.702477372Z" level=info msg="Stopping pod sandbox: 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=18f9e91d-8a04-4550-bc4a-592b7c61b75d name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.702536915Z" level=info msg="Stopped pod sandbox (already stopped): 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=18f9e91d-8a04-4550-bc4a-592b7c61b75d name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.702889478Z" level=info msg="Removing pod sandbox: 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=6ac11b3a-a8eb-4e4a-96d5-29597c3ca095 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.716793263Z" level=info msg="Removed pod sandbox: 7b023129bbdeeb5946ddfdd834998f0ee25b9bb06addf023605963380dafa054" id=6ac11b3a-a8eb-4e4a-96d5-29597c3ca095 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.717461942Z" level=info msg="Stopping pod sandbox: 9784d5fbc2b464bb3bacbac45c5cfaa6593199cbb3de8da1bc527c2bc912cb44" id=6e568f59-37f3-409f-8702-af4627e770b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.717619869Z" level=info msg="Stopped pod sandbox (already stopped): 9784d5fbc2b464bb3bacbac45c5cfaa6593199cbb3de8da1bc527c2bc912cb44" id=6e568f59-37f3-409f-8702-af4627e770b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.718094036Z" level=info msg="Removing pod sandbox: 9784d5fbc2b464bb3bacbac45c5cfaa6593199cbb3de8da1bc527c2bc912cb44" id=3a657f2e-2436-4b90-a904-666f55fdeaa5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.728531721Z" level=info msg="Removed pod sandbox: 9784d5fbc2b464bb3bacbac45c5cfaa6593199cbb3de8da1bc527c2bc912cb44" id=3a657f2e-2436-4b90-a904-666f55fdeaa5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.729063174Z" level=info msg="Stopping pod sandbox: 6462e685c59d40b5c5f69485edfd00265420ee7206bb2d6b9691e53f7f958f5b" id=033a7b50-2270-4d07-9f24-c6a8ec71fc66 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.729205898Z" level=info msg="Stopped pod sandbox (already stopped): 6462e685c59d40b5c5f69485edfd00265420ee7206bb2d6b9691e53f7f958f5b" id=033a7b50-2270-4d07-9f24-c6a8ec71fc66 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.729516804Z" level=info msg="Removing pod sandbox: 6462e685c59d40b5c5f69485edfd00265420ee7206bb2d6b9691e53f7f958f5b" id=d66add26-8524-4b16-b051-f9411d2a1249 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:00:15 addons-778133 crio[961]: time="2024-08-19 18:00:15.739979424Z" level=info msg="Removed pod sandbox: 6462e685c59d40b5c5f69485edfd00265420ee7206bb2d6b9691e53f7f958f5b" id=d66add26-8524-4b16-b051-f9411d2a1249 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:11 addons-778133 crio[961]: time="2024-08-19 18:03:11.353741898Z" level=info msg="Stopping container: 78d4968fc5b740ea2abb68c79efd379a1e95ac87847bac1544b73905038a26e7 (timeout: 30s)" id=f64483b3-ee11-4a0e-850b-a90d647a64f2 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:03:12 addons-778133 crio[961]: time="2024-08-19 18:03:12.519544375Z" level=info msg="Stopped container 78d4968fc5b740ea2abb68c79efd379a1e95ac87847bac1544b73905038a26e7: kube-system/metrics-server-8988944d9-f95p9/metrics-server" id=f64483b3-ee11-4a0e-850b-a90d647a64f2 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:03:12 addons-778133 crio[961]: time="2024-08-19 18:03:12.520100042Z" level=info msg="Stopping pod sandbox: 2d60f28975825f614998a54ca95deefee9c97e9f0de53e1c96dc9aedfef8941c" id=80aac466-3153-441c-9692-279b049c0371 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:12 addons-778133 crio[961]: time="2024-08-19 18:03:12.520396862Z" level=info msg="Got pod network &{Name:metrics-server-8988944d9-f95p9 Namespace:kube-system ID:2d60f28975825f614998a54ca95deefee9c97e9f0de53e1c96dc9aedfef8941c UID:01704ab9-a4d6-4222-9216-dc0418048204 NetNS:/var/run/netns/e1cfc01e-c755-493c-9ff2-3ef53a1ee1fc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 18:03:12 addons-778133 crio[961]: time="2024-08-19 18:03:12.520553559Z" level=info msg="Deleting pod kube-system_metrics-server-8988944d9-f95p9 from CNI network \"kindnet\" (type=ptp)"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57e97b6aa75d0       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   b94cb1c6fa206       hello-world-app-55bf9c44b4-78fvr
	d0c76f8a51cff       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         5 minutes ago       Running             nginx                     0                   697d05b4cfc6d       nginx
	96e92759b80ac       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   5 minutes ago       Running             headlamp                  0                   21001031e7905       headlamp-57fb76fcdb-bsc82
	54874153e84f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     7 minutes ago       Running             busybox                   0                   c8bf5dcfe3392       busybox
	78d4968fc5b74       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   2d60f28975825       metrics-server-8988944d9-f95p9
	2f59708cc8e1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        9 minutes ago       Running             storage-provisioner       0                   fb1f3160eba87       storage-provisioner
	cb8ee644d62a0       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        9 minutes ago       Running             coredns                   0                   4f5369fda24fb       coredns-6f6b679f8f-l8nmv
	7ac2f38031322       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      9 minutes ago       Running             kindnet-cni               0                   cf9e71b77b860       kindnet-mnkhw
	665fbf835c117       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        9 minutes ago       Running             kube-proxy                0                   f84e0226d1528       kube-proxy-jzvz5
	d6d1155da1ee8       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        10 minutes ago      Running             kube-scheduler            0                   ef351c987be1c       kube-scheduler-addons-778133
	73059fa5f98e6       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        10 minutes ago      Running             kube-apiserver            0                   3ce8f38d2889f       kube-apiserver-addons-778133
	74f05f4f63420       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        10 minutes ago      Running             etcd                      0                   0c40df88d60af       etcd-addons-778133
	186afb1dba18c       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        10 minutes ago      Running             kube-controller-manager   0                   c7f30670453db       kube-controller-manager-addons-778133
	
	
	==> coredns [cb8ee644d62a0dc22bc100a5a9f35dc39caa3d549c8db687d468ced87b167c2b] <==
	[INFO] 10.244.0.14:55011 - 42576 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003109691s
	[INFO] 10.244.0.14:54826 - 37430 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131853s
	[INFO] 10.244.0.14:54826 - 56627 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000240298s
	[INFO] 10.244.0.14:60519 - 59371 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150372s
	[INFO] 10.244.0.14:60519 - 57839 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000282045s
	[INFO] 10.244.0.14:42949 - 37512 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043363s
	[INFO] 10.244.0.14:42949 - 10122 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000234169s
	[INFO] 10.244.0.14:45041 - 51772 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056154s
	[INFO] 10.244.0.14:45041 - 59966 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004036s
	[INFO] 10.244.0.14:34910 - 26992 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00188617s
	[INFO] 10.244.0.14:34910 - 46701 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001900462s
	[INFO] 10.244.0.14:56948 - 29684 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099461s
	[INFO] 10.244.0.14:56948 - 1782 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045513s
	[INFO] 10.244.0.20:50743 - 37083 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000219769s
	[INFO] 10.244.0.20:45557 - 33365 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143398s
	[INFO] 10.244.0.20:57368 - 60701 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157412s
	[INFO] 10.244.0.20:43160 - 41118 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142241s
	[INFO] 10.244.0.20:45603 - 19671 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127882s
	[INFO] 10.244.0.20:35560 - 60628 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118759s
	[INFO] 10.244.0.20:36075 - 42310 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002683346s
	[INFO] 10.244.0.20:37533 - 3033 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002363722s
	[INFO] 10.244.0.20:41044 - 32113 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000688879s
	[INFO] 10.244.0.20:51797 - 31819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002364445s
	[INFO] 10.244.0.22:41573 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151611s
	[INFO] 10.244.0.22:42228 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000632175s
	
	
	==> describe nodes <==
	Name:               addons-778133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-778133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=addons-778133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_53_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-778133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-778133
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:00:22 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:00:22 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:00:22 +0000   Mon, 19 Aug 2024 17:53:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:00:22 +0000   Mon, 19 Aug 2024 17:54:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-778133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bfc93155ec64edc9657b547521008c5
	  System UUID:                e768685e-9a74-48fe-97d3-1ac53dac6fc4
	  Boot ID:                    b7846bbc-2ca5-4e44-8ea6-94e5c03d25fd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  default                     hello-world-app-55bf9c44b4-78fvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  headlamp                    headlamp-57fb76fcdb-bsc82                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-6f6b679f8f-l8nmv                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m54s
	  kube-system                 etcd-addons-778133                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m58s
	  kube-system                 kindnet-mnkhw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m55s
	  kube-system                 kube-apiserver-addons-778133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-controller-manager-addons-778133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-jzvz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-addons-778133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m47s              kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-778133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-778133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-778133 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m59s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m59s              kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m59s              kubelet          Node addons-778133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m59s              kubelet          Node addons-778133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m59s              kubelet          Node addons-778133 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m55s              node-controller  Node addons-778133 event: Registered Node addons-778133 in Controller
	  Normal   NodeReady                9m7s               kubelet          Node addons-778133 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 16:56] systemd-journald[216]: Failed to send stream file descriptor to service manager: Connection refused
	[Aug19 17:22] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug19 17:30] hrtimer: interrupt took 7461724 ns
	
	
	==> etcd [74f05f4f63420cff3cd62324d3d02c2a0f724ca0d45c67efb60c7152162de839] <==
	{"level":"warn","ts":"2024-08-19T17:53:22.498867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.148464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:22.498928Z","caller":"traceutil/trace.go:171","msg":"trace[1294876105] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"274.215548ms","start":"2024-08-19T17:53:22.224700Z","end":"2024-08-19T17:53:22.498916Z","steps":["trace[1294876105] 'agreement among raft nodes before linearized reading'  (duration: 274.119805ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.149643Z","caller":"traceutil/trace.go:171","msg":"trace[857139026] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"109.888977ms","start":"2024-08-19T17:53:24.039742Z","end":"2024-08-19T17:53:24.149631Z","steps":["trace[857139026] 'process raft request'  (duration: 72.042863ms)","trace[857139026] 'compare'  (duration: 37.016585ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:53:24.149897Z","caller":"traceutil/trace.go:171","msg":"trace[2115532947] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"109.061467ms","start":"2024-08-19T17:53:24.040826Z","end":"2024-08-19T17:53:24.149888Z","steps":["trace[2115532947] 'process raft request'  (duration: 108.17823ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.149998Z","caller":"traceutil/trace.go:171","msg":"trace[1426841752] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:490; }","duration":"108.999807ms","start":"2024-08-19T17:53:24.040992Z","end":"2024-08-19T17:53:24.149991Z","steps":["trace[1426841752] 'read index received'  (duration: 70.722811ms)","trace[1426841752] 'applied index is now lower than readState.Index'  (duration: 38.276438ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:53:24.150053Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.04747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.237730Z","caller":"traceutil/trace.go:171","msg":"trace[1743215363] range","detail":"{range_begin:/registry/clusterrolebindings/yakd-dashboard; range_end:; response_count:0; response_revision:486; }","duration":"196.725561ms","start":"2024-08-19T17:53:24.040986Z","end":"2024-08-19T17:53:24.237712Z","steps":["trace[1743215363] 'agreement among raft nodes before linearized reading'  (duration: 109.023978ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:53:24.150067Z","caller":"traceutil/trace.go:171","msg":"trace[432888513] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"107.88784ms","start":"2024-08-19T17:53:24.041574Z","end":"2024-08-19T17:53:24.149462Z","steps":["trace[432888513] 'process raft request'  (duration: 107.58269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.228275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.188443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-08-19T17:53:24.253045Z","caller":"traceutil/trace.go:171","msg":"trace[257717253] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:490; }","duration":"210.964407ms","start":"2024-08-19T17:53:24.042060Z","end":"2024-08-19T17:53:24.253024Z","steps":["trace[257717253] 'agreement among raft nodes before linearized reading'  (duration: 184.309749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.207822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4996"}
	{"level":"info","ts":"2024-08-19T17:53:24.253505Z","caller":"traceutil/trace.go:171","msg":"trace[118409466] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:490; }","duration":"140.962573ms","start":"2024-08-19T17:53:24.112531Z","end":"2024-08-19T17:53:24.253494Z","steps":["trace[118409466] 'agreement among raft nodes before linearized reading'  (duration: 120.620824ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.470651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-778133\" ","response":"range_response_count:1 size:5738"}
	{"level":"info","ts":"2024-08-19T17:53:24.254794Z","caller":"traceutil/trace.go:171","msg":"trace[1358600118] range","detail":"{range_begin:/registry/minions/addons-778133; range_end:; response_count:1; response_revision:490; }","duration":"142.407078ms","start":"2024-08-19T17:53:24.112376Z","end":"2024-08-19T17:53:24.254783Z","steps":["trace[1358600118] 'agreement among raft nodes before linearized reading'  (duration: 121.415136ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.561644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-778133\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-08-19T17:53:24.256762Z","caller":"traceutil/trace.go:171","msg":"trace[427476606] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-778133; range_end:; response_count:1; response_revision:490; }","duration":"144.407861ms","start":"2024-08-19T17:53:24.112339Z","end":"2024-08-19T17:53:24.256747Z","steps":["trace[427476606] 'agreement among raft nodes before linearized reading'  (duration: 121.525377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.634635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.260492Z","caller":"traceutil/trace.go:171","msg":"trace[1113090609] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:490; }","duration":"148.191629ms","start":"2024-08-19T17:53:24.112289Z","end":"2024-08-19T17:53:24.260481Z","steps":["trace[1113090609] 'agreement among raft nodes before linearized reading'  (duration: 121.6262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233950Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.720804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-08-19T17:53:24.260723Z","caller":"traceutil/trace.go:171","msg":"trace[159367268] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:490; }","duration":"209.489953ms","start":"2024-08-19T17:53:24.051226Z","end":"2024-08-19T17:53:24.260716Z","steps":["trace[159367268] 'agreement among raft nodes before linearized reading'  (duration: 182.70939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:53:24.233971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.064928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:53:24.260849Z","caller":"traceutil/trace.go:171","msg":"trace[1871651477] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:490; }","duration":"209.939552ms","start":"2024-08-19T17:53:24.050902Z","end":"2024-08-19T17:53:24.260842Z","steps":["trace[1871651477] 'agreement among raft nodes before linearized reading'  (duration: 183.056156ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:03:08.937235Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1971}
	{"level":"info","ts":"2024-08-19T18:03:08.973087Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1971,"took":"35.25964ms","hash":658415281,"current-db-size-bytes":8376320,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5238784,"current-db-size-in-use":"5.2 MB"}
	{"level":"info","ts":"2024-08-19T18:03:08.973139Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":658415281,"revision":1971,"compact-revision":-1}
	
	
	==> kernel <==
	 18:03:12 up  1:45,  0 users,  load average: 0.05, 0.78, 2.29
	Linux addons-778133 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7ac2f38031322d90bc8f3eb3bfa6da3f784042c58224c760cc400ec98a48dd97] <==
	I0819 18:02:05.022365       1 main.go:299] handling current node
	W0819 18:02:10.615451       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:02:10.615485       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 18:02:15.021402       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:15.021536       1 main.go:299] handling current node
	W0819 18:02:15.380817       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:02:15.380851       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 18:02:19.823510       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:02:19.823545       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:02:25.022237       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:25.022278       1 main.go:299] handling current node
	I0819 18:02:35.022223       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:35.022260       1 main.go:299] handling current node
	W0819 18:02:44.492578       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:02:44.492723       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 18:02:45.022279       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:45.022439       1 main.go:299] handling current node
	W0819 18:02:45.465865       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:02:45.465899       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:02:55.021581       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:55.021628       1 main.go:299] handling current node
	W0819 18:02:58.545687       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:02:58.545724       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:03:05.021975       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:03:05.022015       1 main.go:299] handling current node
	
	
	==> kube-apiserver [73059fa5f98e625222f99d1c0ccd9be487118a9eb95aaf305e8e6b790b9b63f5] <==
	I0819 17:56:37.748568       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 17:56:40.166950       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:40.178326       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:40.189522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0819 17:56:55.190255       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0819 17:57:04.485630       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.485684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.506912       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.506984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.601005       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.601130       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.602249       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.602394       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 17:57:04.720080       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 17:57:04.720200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 17:57:05.601798       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 17:57:05.721185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 17:57:05.735480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 17:57:11.428170       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.217.251"}
	I0819 17:57:30.299442       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 17:57:31.332189       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 17:57:35.931103       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 17:57:36.255080       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.23.252"}
	I0819 17:59:57.099037       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.254.203"}
	E0819 17:59:59.089258       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [186afb1dba18cee26a879963bcef9931fe756251f8f43f6c2880f3f205cf1160] <==
	W0819 18:01:06.464942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:06.464985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:23.949883       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:23.949926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:27.643960       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:27.644000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:37.186667       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:37.186708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:39.436719       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:39.436761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:59.183299       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:59.183344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:04.675528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:04.675654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:18.151544       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:18.151592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:18.246887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:18.246941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:50.544337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:50.544383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:03.202480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:03.202534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:08.538072       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:08.538117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:03:11.323041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.746µs"
	
	
	==> kube-proxy [665fbf835c11771d7e3aaf8a2a1f03b04eae6f6e64c172aa757f0e9e0a91a481] <==
	I0819 17:53:23.126781       1 server_linux.go:66] "Using iptables proxy"
	I0819 17:53:25.380403       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 17:53:25.398358       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:53:25.608063       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 17:53:25.608209       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:53:25.613045       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:53:25.613675       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:53:25.613748       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:53:25.632029       1 config.go:197] "Starting service config controller"
	I0819 17:53:25.632078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:53:25.632110       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:53:25.632115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:53:25.632608       1 config.go:326] "Starting node config controller"
	I0819 17:53:25.632628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:53:25.732530       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:53:25.735776       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:53:25.735810       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d6d1155da1ee8a6b77812bf27813c6e69d21431c5746878ef19a7ff85f4a06e0] <==
	W0819 17:53:11.001559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:11.004107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.001588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:53:11.004252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.833134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:53:11.833258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.845910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:53:11.846018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.863522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:11.863640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:11.962266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:53:11.962383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.047930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:53:12.048047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.076410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:53:12.076458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.118469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:53:12.118526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.125759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:53:12.125811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.277982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:53:12.278028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:53:12.315006       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:53:12.315049       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 17:53:15.472040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:01:44 addons-778133 kubelet[1494]: E0819 18:01:44.023907    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090504023649111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:01:44 addons-778133 kubelet[1494]: E0819 18:01:44.023988    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090504023649111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:01:54 addons-778133 kubelet[1494]: E0819 18:01:54.026910    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090514026635554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:01:54 addons-778133 kubelet[1494]: E0819 18:01:54.026959    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090514026635554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:04 addons-778133 kubelet[1494]: E0819 18:02:04.030116    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524029835894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:04 addons-778133 kubelet[1494]: E0819 18:02:04.030159    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524029835894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:14 addons-778133 kubelet[1494]: E0819 18:02:14.032884    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090534032634765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:14 addons-778133 kubelet[1494]: E0819 18:02:14.032933    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090534032634765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:24 addons-778133 kubelet[1494]: E0819 18:02:24.036439    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090544036119124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:24 addons-778133 kubelet[1494]: E0819 18:02:24.036477    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090544036119124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:32 addons-778133 kubelet[1494]: I0819 18:02:32.726541    1494 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 18:02:34 addons-778133 kubelet[1494]: E0819 18:02:34.039314    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090554039054649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:34 addons-778133 kubelet[1494]: E0819 18:02:34.039351    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090554039054649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:44 addons-778133 kubelet[1494]: E0819 18:02:44.042562    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090564042317521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:44 addons-778133 kubelet[1494]: E0819 18:02:44.042602    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090564042317521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:54 addons-778133 kubelet[1494]: E0819 18:02:54.045496    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090574045188202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:54 addons-778133 kubelet[1494]: E0819 18:02:54.045534    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090574045188202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:04 addons-778133 kubelet[1494]: E0819 18:03:04.048176    1494 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090584047917465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:04 addons-778133 kubelet[1494]: E0819 18:03:04.048218    1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090584047917465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.645743    1494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrk4k\" (UniqueName: \"kubernetes.io/projected/01704ab9-a4d6-4222-9216-dc0418048204-kube-api-access-zrk4k\") pod \"01704ab9-a4d6-4222-9216-dc0418048204\" (UID: \"01704ab9-a4d6-4222-9216-dc0418048204\") "
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.645815    1494 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/01704ab9-a4d6-4222-9216-dc0418048204-tmp-dir\") pod \"01704ab9-a4d6-4222-9216-dc0418048204\" (UID: \"01704ab9-a4d6-4222-9216-dc0418048204\") "
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.646149    1494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01704ab9-a4d6-4222-9216-dc0418048204-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "01704ab9-a4d6-4222-9216-dc0418048204" (UID: "01704ab9-a4d6-4222-9216-dc0418048204"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.651849    1494 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01704ab9-a4d6-4222-9216-dc0418048204-kube-api-access-zrk4k" (OuterVolumeSpecName: "kube-api-access-zrk4k") pod "01704ab9-a4d6-4222-9216-dc0418048204" (UID: "01704ab9-a4d6-4222-9216-dc0418048204"). InnerVolumeSpecName "kube-api-access-zrk4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.746799    1494 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/01704ab9-a4d6-4222-9216-dc0418048204-tmp-dir\") on node \"addons-778133\" DevicePath \"\""
	Aug 19 18:03:12 addons-778133 kubelet[1494]: I0819 18:03:12.746834    1494 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zrk4k\" (UniqueName: \"kubernetes.io/projected/01704ab9-a4d6-4222-9216-dc0418048204-kube-api-access-zrk4k\") on node \"addons-778133\" DevicePath \"\""
	
	
	==> storage-provisioner [2f59708cc8e1b0b8e7dfd0401a210142e0eed0afc80bb2a9f073bd6240219ca3] <==
	I0819 17:54:06.393834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:54:06.405817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:54:06.406877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:54:06.419929       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:54:06.420156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b!
	I0819 17:54:06.420829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00224464-263e-42b7-bd36-2bcb2ab3a0ec", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b became leader
	I0819 17:54:06.521882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-778133_483e0bf7-169c-4c08-80c6-1a281c4de92b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-778133 -n addons-778133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-778133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (351.19s)


Test pass (296/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.73
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.46
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 190.57
31 TestAddons/serial/GCPAuth/Namespaces 0.2
33 TestAddons/parallel/Registry 15.01
35 TestAddons/parallel/InspektorGadget 11.8
39 TestAddons/parallel/CSI 68.72
40 TestAddons/parallel/Headlamp 13.21
41 TestAddons/parallel/CloudSpanner 5.6
42 TestAddons/parallel/LocalPath 53.8
43 TestAddons/parallel/NvidiaDevicePlugin 5.86
44 TestAddons/parallel/Yakd 11.8
45 TestAddons/StoppedEnableDisable 12.2
46 TestCertOptions 39.53
47 TestCertExpiration 245.16
49 TestForceSystemdFlag 38.9
50 TestForceSystemdEnv 42.14
56 TestErrorSpam/setup 29.88
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.05
59 TestErrorSpam/pause 1.87
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 1.43
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.76
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.49
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.61
73 TestFunctional/serial/CacheCmd/cache/add_local 1.28
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 38.2
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.76
84 TestFunctional/serial/LogsFileCmd 1.74
85 TestFunctional/serial/InvalidService 4.55
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 10.49
89 TestFunctional/parallel/DryRun 0.55
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.01
95 TestFunctional/parallel/ServiceCmdConnect 8.59
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 23.83
99 TestFunctional/parallel/SSHCmd 0.54
100 TestFunctional/parallel/CpCmd 2.08
102 TestFunctional/parallel/FileSync 0.31
103 TestFunctional/parallel/CertSync 2.2
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
111 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 1.16
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.5
119 TestFunctional/parallel/ImageCommands/Setup 0.75
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
125 TestFunctional/parallel/ServiceCmd/DeployApp 10.28
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.12
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.36
136 TestFunctional/parallel/ServiceCmd/List 0.34
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
139 TestFunctional/parallel/ServiceCmd/Format 0.36
140 TestFunctional/parallel/ServiceCmd/URL 0.38
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/ProfileCmd/profile_list 0.38
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
150 TestFunctional/parallel/MountCmd/any-port 7.82
151 TestFunctional/parallel/MountCmd/specific-port 2.04
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.29
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 175.45
160 TestMultiControlPlane/serial/DeployApp 7.58
161 TestMultiControlPlane/serial/PingHostFromPods 1.54
162 TestMultiControlPlane/serial/AddWorkerNode 39.2
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 18.65
166 TestMultiControlPlane/serial/StopSecondaryNode 12.73
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 24.34
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.88
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 174.91
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.76
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
173 TestMultiControlPlane/serial/StopCluster 35.85
174 TestMultiControlPlane/serial/RestartCluster 109.9
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
176 TestMultiControlPlane/serial/AddSecondaryNode 74.73
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 47.85
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.84
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
206 TestKicCustomNetwork/create_custom_network 40.83
207 TestKicCustomNetwork/use_default_bridge_network 35.69
208 TestKicExistingNetwork 32.9
209 TestKicCustomSubnet 33.7
210 TestKicStaticIP 34.26
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 70.06
215 TestMountStart/serial/StartWithMountFirst 6.47
216 TestMountStart/serial/VerifyMountFirst 0.27
217 TestMountStart/serial/StartWithMountSecond 6.34
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 8.02
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 78.78
227 TestMultiNode/serial/DeployApp2Nodes 4.63
228 TestMultiNode/serial/PingHostFrom2Pods 1.01
229 TestMultiNode/serial/AddNode 30.38
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 10.03
233 TestMultiNode/serial/StopNode 2.45
234 TestMultiNode/serial/StartAfterStop 9.94
235 TestMultiNode/serial/RestartKeepsNodes 116.34
236 TestMultiNode/serial/DeleteNode 5.72
237 TestMultiNode/serial/StopMultiNode 23.8
238 TestMultiNode/serial/RestartMultiNode 55.21
239 TestMultiNode/serial/ValidateNameConflict 32.86
244 TestPreload 128.61
246 TestScheduledStopUnix 108.05
249 TestInsufficientStorage 12.94
250 TestRunningBinaryUpgrade 79.89
252 TestKubernetesUpgrade 390.85
253 TestMissingContainerUpgrade 152.11
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 39.05
257 TestNoKubernetes/serial/StartWithStopK8s 16.26
258 TestNoKubernetes/serial/Start 10.75
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
260 TestNoKubernetes/serial/ProfileList 5.31
261 TestNoKubernetes/serial/Stop 1.3
262 TestNoKubernetes/serial/StartNoArgs 7.84
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
264 TestStoppedBinaryUpgrade/Setup 0.81
265 TestStoppedBinaryUpgrade/Upgrade 75.33
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
275 TestPause/serial/Start 54.47
276 TestPause/serial/SecondStartNoReconfiguration 36.36
277 TestPause/serial/Pause 0.89
278 TestPause/serial/VerifyStatus 0.44
279 TestPause/serial/Unpause 0.68
280 TestPause/serial/PauseAgain 0.84
281 TestPause/serial/DeletePaused 2.45
282 TestPause/serial/VerifyDeletedResources 0.35
290 TestNetworkPlugins/group/false 3.59
295 TestStartStop/group/old-k8s-version/serial/FirstStart 150.94
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.91
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
298 TestStartStop/group/old-k8s-version/serial/Stop 12.04
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
300 TestStartStop/group/old-k8s-version/serial/SecondStart 136.75
302 TestStartStop/group/embed-certs/serial/FirstStart 57.93
303 TestStartStop/group/embed-certs/serial/DeployApp 9.43
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
305 TestStartStop/group/embed-certs/serial/Stop 11.94
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/embed-certs/serial/SecondStart 276.95
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/old-k8s-version/serial/Pause 3.03
313 TestStartStop/group/no-preload/serial/FirstStart 68.01
314 TestStartStop/group/no-preload/serial/DeployApp 9.33
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
316 TestStartStop/group/no-preload/serial/Stop 11.89
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
318 TestStartStop/group/no-preload/serial/SecondStart 302.15
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/embed-certs/serial/Pause 3
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.3
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.65
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/no-preload/serial/Pause 3.09
335 TestStartStop/group/newest-cni/serial/FirstStart 39.1
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
338 TestStartStop/group/newest-cni/serial/Stop 1.24
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 15.38
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/newest-cni/serial/Pause 3.16
345 TestNetworkPlugins/group/kindnet/Start 51
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
348 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
349 TestNetworkPlugins/group/kindnet/DNS 0.18
350 TestNetworkPlugins/group/kindnet/Localhost 0.15
351 TestNetworkPlugins/group/kindnet/HairPin 0.15
352 TestNetworkPlugins/group/auto/Start 57.11
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.8
357 TestNetworkPlugins/group/flannel/Start 49.93
358 TestNetworkPlugins/group/auto/KubeletFlags 0.36
359 TestNetworkPlugins/group/auto/NetCatPod 13.32
360 TestNetworkPlugins/group/auto/DNS 0.19
361 TestNetworkPlugins/group/auto/Localhost 0.17
362 TestNetworkPlugins/group/auto/HairPin 0.18
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
365 TestNetworkPlugins/group/flannel/NetCatPod 13.35
366 TestNetworkPlugins/group/enable-default-cni/Start 79.8
367 TestNetworkPlugins/group/flannel/DNS 0.24
368 TestNetworkPlugins/group/flannel/Localhost 0.18
369 TestNetworkPlugins/group/flannel/HairPin 0.2
370 TestNetworkPlugins/group/bridge/Start 75.64
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
377 TestNetworkPlugins/group/bridge/NetCatPod 13.33
378 TestNetworkPlugins/group/calico/Start 64.46
379 TestNetworkPlugins/group/bridge/DNS 0.21
380 TestNetworkPlugins/group/bridge/Localhost 0.21
381 TestNetworkPlugins/group/bridge/HairPin 0.19
382 TestNetworkPlugins/group/custom-flannel/Start 60.15
383 TestNetworkPlugins/group/calico/ControllerPod 6.01
384 TestNetworkPlugins/group/calico/KubeletFlags 0.33
385 TestNetworkPlugins/group/calico/NetCatPod 10.46
386 TestNetworkPlugins/group/calico/DNS 0.23
387 TestNetworkPlugins/group/calico/Localhost 0.17
388 TestNetworkPlugins/group/calico/HairPin 0.16
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.45
391 TestNetworkPlugins/group/custom-flannel/DNS 0.23
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.32
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
TestDownloadOnly/v1.20.0/json-events (10.73s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-588340 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-588340 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.725799065s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.73s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-588340
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-588340: exit status 85 (63.901957ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-588340 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |          |
	|         | -p download-only-588340        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:52:07
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:52:07.718135  434833 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:52:07.718289  434833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:07.718300  434833 out.go:358] Setting ErrFile to fd 2...
	I0819 17:52:07.718306  434833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:07.719157  434833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	W0819 17:52:07.719369  434833 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19478-429440/.minikube/config/config.json: open /home/jenkins/minikube-integration/19478-429440/.minikube/config/config.json: no such file or directory
	I0819 17:52:07.719868  434833 out.go:352] Setting JSON to true
	I0819 17:52:07.720875  434833 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5675,"bootTime":1724084253,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 17:52:07.720981  434833 start.go:139] virtualization:  
	I0819 17:52:07.724972  434833 out.go:97] [download-only-588340] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 17:52:07.725201  434833 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 17:52:07.725244  434833 notify.go:220] Checking for updates...
	I0819 17:52:07.728161  434833 out.go:169] MINIKUBE_LOCATION=19478
	I0819 17:52:07.731019  434833 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:52:07.733748  434833 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:52:07.736301  434833 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 17:52:07.738888  434833 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 17:52:07.744035  434833 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:52:07.744353  434833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:52:07.765856  434833 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:52:07.765957  434833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:07.828162  434833 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 17:52:07.818565981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:07.828325  434833 docker.go:307] overlay module found
	I0819 17:52:07.831522  434833 out.go:97] Using the docker driver based on user configuration
	I0819 17:52:07.831557  434833 start.go:297] selected driver: docker
	I0819 17:52:07.831569  434833 start.go:901] validating driver "docker" against <nil>
	I0819 17:52:07.831685  434833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:07.888959  434833 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 17:52:07.879702705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:07.889130  434833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:52:07.889388  434833 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 17:52:07.889545  434833 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:52:07.892355  434833 out.go:169] Using Docker driver with root privileges
	I0819 17:52:07.894833  434833 cni.go:84] Creating CNI manager for ""
	I0819 17:52:07.894859  434833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:07.894871  434833 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:52:07.894955  434833 start.go:340] cluster config:
	{Name:download-only-588340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-588340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:07.897553  434833 out.go:97] Starting "download-only-588340" primary control-plane node in "download-only-588340" cluster
	I0819 17:52:07.897576  434833 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:52:07.900250  434833 out.go:97] Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:52:07.900276  434833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:52:07.900424  434833 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:52:07.919019  434833 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:52:07.919608  434833 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:52:07.919712  434833 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:52:07.966392  434833 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:07.966432  434833 cache.go:56] Caching tarball of preloaded images
	I0819 17:52:07.966618  434833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:52:07.969541  434833 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 17:52:07.969571  434833 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 17:52:08.061166  434833 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:15.691966  434833 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 17:52:15.692151  434833 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 17:52:16.017127  434833 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:52:16.817883  434833 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 17:52:16.818296  434833 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/download-only-588340/config.json ...
	I0819 17:52:16.818334  434833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/download-only-588340/config.json: {Name:mka6245b819e33ec40f7b0735c93a84d6029c114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:52:16.818963  434833 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:52:16.819955  434833 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19478-429440/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-588340 host does not exist
	  To start a cluster, run: "minikube start -p download-only-588340"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-588340
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (6.46s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-198345 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-198345 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.459630418s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.46s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-198345
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-198345: exit status 85 (61.064885ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-588340 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | -p download-only-588340        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| delete  | -p download-only-588340        | download-only-588340 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC | 19 Aug 24 17:52 UTC |
	| start   | -o=json --download-only        | download-only-198345 | jenkins | v1.33.1 | 19 Aug 24 17:52 UTC |                     |
	|         | -p download-only-198345        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:52:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:52:18.844432  435039 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:52:18.844634  435039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:18.844660  435039 out.go:358] Setting ErrFile to fd 2...
	I0819 17:52:18.844682  435039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:52:18.844960  435039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 17:52:18.845402  435039 out.go:352] Setting JSON to true
	I0819 17:52:18.846341  435039 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5686,"bootTime":1724084253,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 17:52:18.846437  435039 start.go:139] virtualization:  
	I0819 17:52:18.852256  435039 out.go:97] [download-only-198345] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:52:18.852506  435039 notify.go:220] Checking for updates...
	I0819 17:52:18.856357  435039 out.go:169] MINIKUBE_LOCATION=19478
	I0819 17:52:18.861858  435039 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:52:18.867090  435039 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 17:52:18.872494  435039 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 17:52:18.877875  435039 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 17:52:18.887801  435039 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:52:18.888079  435039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:52:18.911973  435039 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:52:18.912086  435039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:18.964854  435039 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:18.955576608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:18.964972  435039 docker.go:307] overlay module found
	I0819 17:52:18.966401  435039 out.go:97] Using the docker driver based on user configuration
	I0819 17:52:18.966422  435039 start.go:297] selected driver: docker
	I0819 17:52:18.966429  435039 start.go:901] validating driver "docker" against <nil>
	I0819 17:52:18.966536  435039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:52:19.025737  435039 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:52:19.016504125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:52:19.025902  435039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:52:19.026195  435039 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 17:52:19.026378  435039 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:52:19.028010  435039 out.go:169] Using Docker driver with root privileges
	I0819 17:52:19.029167  435039 cni.go:84] Creating CNI manager for ""
	I0819 17:52:19.029200  435039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:52:19.029212  435039 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:52:19.029297  435039 start.go:340] cluster config:
	{Name:download-only-198345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-198345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:52:19.030935  435039 out.go:97] Starting "download-only-198345" primary control-plane node in "download-only-198345" cluster
	I0819 17:52:19.030960  435039 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:52:19.032651  435039 out.go:97] Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:52:19.032688  435039 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:19.032790  435039 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:52:19.047308  435039 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:52:19.047457  435039 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:52:19.047476  435039 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory, skipping pull
	I0819 17:52:19.047481  435039 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b exists in cache, skipping pull
	I0819 17:52:19.047489  435039 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:52:19.094729  435039 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:19.094754  435039 cache.go:56] Caching tarball of preloaded images
	I0819 17:52:19.095305  435039 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:52:19.096689  435039 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 17:52:19.096710  435039 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 17:52:19.194010  435039 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 17:52:23.662358  435039 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 17:52:23.662463  435039 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19478-429440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-198345 host does not exist
	  To start a cluster, run: "minikube start -p download-only-198345"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)
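The download logged above fetches the preload tarball and verifies it against the md5 carried in the URL's `checksum` query parameter before caching it. A minimal shell sketch of that verify-before-trust step; the URL and md5 are taken from the log, the live transfer is left commented out, and `demo.bin` is a hypothetical stand-in file so the verification itself can run offline:

```shell
#!/bin/sh
# Real transfer (needs network): fetch the preload, then refuse the
# file unless it matches the md5 that download.go was given.
# curl -fLo preload.tar.lz4 \
#   "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4"
# echo "e6af375765e1700a37be5f07489fb80e  preload.tar.lz4" | md5sum -c -

# Offline demonstration of the same check on a stand-in file:
printf 'preload-bytes\n' > demo.bin
expected=$(md5sum demo.bin | awk '{print $1}')
if echo "$expected  demo.bin" | md5sum -c - >/dev/null 2>&1; then
  verify_result=ok
else
  verify_result=corrupt
fi
echo "checksum $verify_result"    # prints: checksum ok
rm -f demo.bin
```

Note the two spaces between hash and filename: that is the format `md5sum -c` expects on its input lines.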

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-198345
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-383479 --alsologtostderr --binary-mirror http://127.0.0.1:45625 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-383479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-383479
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-778133
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-778133: exit status 85 (85.990709ms)

-- stdout --
	* Profile "addons-778133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-778133"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-778133
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-778133: exit status 85 (86.886902ms)

-- stdout --
	* Profile "addons-778133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-778133"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
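Both PreSetup tests above assert on the exit status alone: minikube exits 85 for usage errors such as a missing profile, and the harness treats that non-zero status as the expected outcome. A small sketch of capturing an exit code without aborting the script; `run_status` is a hypothetical helper, and `sh -c 'exit 85'` stands in for the real `minikube addons enable dashboard -p addons-778133` call:

```shell
#!/bin/sh
# Run a command and emit its exit status instead of letting a failure
# propagate; the caller branches on the captured value.
run_status() {
  "$@"
  echo $?
}

# Stand-in for invoking minikube against a profile that does not exist:
code=$(run_status sh -c 'exit 85')
echo "exit status: $code"    # prints: exit status: 85
```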

TestAddons/Setup (190.57s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-778133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-778133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m10.565149967s)
--- PASS: TestAddons/Setup (190.57s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-778133 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-778133 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/parallel/Registry (15.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.345247ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-jf8nh" [615dc4af-719f-4bfd-bd2e-4fe6e87fe0dc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004592072s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-srkxv" [7eaaa77e-fb85-406d-86c6-1735b5cd1aeb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003658988s
addons_test.go:342: (dbg) Run:  kubectl --context addons-778133 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-778133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-778133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.050012752s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.01s)
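The registry check above uses `wget --spider -S`, which issues the request and prints the response headers without saving a body, so a 2xx status proves the in-cluster service at http://registry.kube-system.svc.cluster.local is serving. The sketch below reproduces that probe against a throwaway local server (port 8089 is an arbitrary choice) so it can run outside a cluster; the live status check is done with python3's urllib so the sketch does not depend on wget being installed:

```shell
#!/bin/sh
# In-cluster form of the probe, as run inside the registry-test pod:
#   wget --spider -S http://registry.kube-system.svc.cluster.local

# Self-contained stand-in: serve the current directory locally, then
# confirm the endpoint answers with HTTP 200.
python3 -m http.server 8089 >/dev/null 2>&1 &
srv=$!
sleep 1
probe=unreachable
if python3 -c 'import urllib.request, sys; sys.exit(0 if urllib.request.urlopen("http://127.0.0.1:8089/").getcode() == 200 else 1)'; then
  probe=reachable
fi
kill "$srv" 2>/dev/null
echo "registry endpoint: $probe"
```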

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6dlfh" [38b33126-22b3-4b03-b51a-04bb3d50ae93] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004650549s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-778133
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-778133: (5.790731587s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/CSI (68.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.053308ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-778133 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/08/19 17:56:10 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc -o jsonpath={.status.phase} -n default
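The run of helpers_test.go lines above is a poll loop: re-run `kubectl get pvc hpvc -o jsonpath={.status.phase}` until the claim reports Bound or the 6m0s budget expires. A shell sketch of the same pattern; `poll_phase` and `fake_pvc_phase` are hypothetical names, and the stub pretends the PVC binds on the third poll so the sketch runs without a cluster:

```shell
#!/bin/sh
# Retry a status command until it prints the wanted phase, bounded by
# max_tries attempts one second apart.
poll_phase() {
  want=$1; max_tries=$2; shift 2
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    phase=$("$@")   # real form: kubectl get pvc hpvc -o jsonpath='{.status.phase}' -n default
    if [ "$phase" = "$want" ]; then
      echo "reached phase: $phase"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for phase: $want" >&2
  return 1
}

# Stub standing in for kubectl: Pending twice, then Bound. A counter
# file survives the command-substitution subshell between polls.
count_file=$(mktemp)
echo 0 > "$count_file"
fake_pvc_phase() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then echo Bound; else echo Pending; fi
}

if poll_phase Bound 10 fake_pvc_phase; then poll_ok=yes; else poll_ok=no; fi
rm -f "$count_file"
echo "poll result: $poll_ok"    # prints: poll result: yes
```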
addons_test.go:580: (dbg) Run:  kubectl --context addons-778133 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [942a7946-33a4-441e-ba2e-ac7c5d7b3af6] Pending
helpers_test.go:344: "task-pv-pod" [942a7946-33a4-441e-ba2e-ac7c5d7b3af6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [942a7946-33a4-441e-ba2e-ac7c5d7b3af6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004887778s
addons_test.go:590: (dbg) Run:  kubectl --context addons-778133 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-778133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-778133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-778133 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-778133 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-778133 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-778133 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [856a4b6a-643b-4c1f-967c-92c5ea35c10a] Pending
helpers_test.go:344: "task-pv-pod-restore" [856a4b6a-643b-4c1f-967c-92c5ea35c10a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [856a4b6a-643b-4c1f-967c-92c5ea35c10a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003774337s
addons_test.go:632: (dbg) Run:  kubectl --context addons-778133 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-778133 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-778133 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.746401558s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.72s)

TestAddons/parallel/Headlamp (13.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-778133 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-bsc82" [728756a8-ca63-47c4-b07d-ebfcf99fc154] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-bsc82" [728756a8-ca63-47c4-b07d-ebfcf99fc154] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-bsc82" [728756a8-ca63-47c4-b07d-ebfcf99fc154] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003519615s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.21s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-2fww9" [e331d81d-4f11-4ada-8980-49a23bb50bdd] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004239124s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-778133
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (53.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-778133 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-778133 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ec604134-80c2-4b70-9edf-b72bde8707b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ec604134-80c2-4b70-9edf-b72bde8707b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ec604134-80c2-4b70-9edf-b72bde8707b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004462684s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-778133 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 ssh "cat /opt/local-path-provisioner/pvc-de919d21-52a1-44ba-882f-4f4cb571fe76_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-778133 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-778133 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.503758963s)
--- PASS: TestAddons/parallel/LocalPath (53.80s)

TestAddons/parallel/NvidiaDevicePlugin (5.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jf6ms" [64aac524-645a-4d2f-a7f0-16e99e357126] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004675071s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-778133
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.86s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-9t4jd" [c47ff107-356a-4fda-a231-c93495f4a227] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003861416s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-778133 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-778133 addons disable yakd --alsologtostderr -v=1: (5.790046588s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-778133
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-778133: (11.931999778s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-778133
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-778133
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-778133
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (39.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-401937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-401937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.75678962s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-401937 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-401937 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-401937 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-401937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-401937
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-401937: (2.137628662s)
--- PASS: TestCertOptions (39.53s)

TestCertExpiration (245.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-136671 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-136671 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.471037471s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-136671 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-136671 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.242001932s)
helpers_test.go:175: Cleaning up "cert-expiration-136671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-136671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-136671: (2.447550218s)
--- PASS: TestCertExpiration (245.16s)

TestForceSystemdFlag (38.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-258865 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-258865 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.554467435s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-258865 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-258865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-258865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-258865: (2.92030789s)
--- PASS: TestForceSystemdFlag (38.90s)

TestForceSystemdEnv (42.14s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-864342 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-864342 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.470285682s)
helpers_test.go:175: Cleaning up "force-systemd-env-864342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-864342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-864342: (2.665981716s)
--- PASS: TestForceSystemdEnv (42.14s)

TestErrorSpam/setup (29.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-512697 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-512697 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-512697 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-512697 --driver=docker  --container-runtime=crio: (29.877994545s)
--- PASS: TestErrorSpam/setup (29.88s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (1.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 pause
--- PASS: TestErrorSpam/pause (1.87s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 stop: (1.237056342s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-512697 --log_dir /tmp/nospam-512697 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19478-429440/.minikube/files/etc/test/nested/copy/434827/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-993381 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (54.757205976s)
--- PASS: TestFunctional/serial/StartWithProxy (54.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.49s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --alsologtostderr -v=8
E0819 18:05:38.149815  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.159162  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.170546  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.192016  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.233509  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.314972  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.476570  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:38.798410  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:39.440715  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:40.722113  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:05:43.284149  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-993381 --alsologtostderr -v=8: (37.482909112s)
functional_test.go:663: soft start took 37.483487238s for "functional-993381" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.49s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-993381 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:3.1: (1.612552984s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:3.3
E0819 18:05:48.405693  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:3.3: (1.683392721s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 cache add registry.k8s.io/pause:latest: (1.315970681s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-993381 /tmp/TestFunctionalserialCacheCmdcacheadd_local4263482368/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache add minikube-local-cache-test:functional-993381
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache delete minikube-local-cache-test:functional-993381
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-993381
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.381092ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 cache reload: (1.165615066s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 kubectl -- --context functional-993381 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-993381 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (38.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0819 18:05:58.647504  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:06:19.128933  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-993381 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.202570322s)
functional_test.go:761: restart took 38.202669151s for "functional-993381" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.20s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-993381 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.76s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 logs: (1.762084643s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

TestFunctional/serial/LogsFileCmd (1.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 logs --file /tmp/TestFunctionalserialLogsFileCmd3980270105/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 logs --file /tmp/TestFunctionalserialLogsFileCmd3980270105/001/logs.txt: (1.741385913s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

TestFunctional/serial/InvalidService (4.55s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-993381 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-993381
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-993381: exit status 115 (820.928857ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32251 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-993381 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.55s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 config get cpus: exit status 14 (82.801823ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 config get cpus: exit status 14 (66.301494ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (10.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-993381 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-993381 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 464317: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.49s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-993381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (241.802222ms)

-- stdout --
	* [functional-993381] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 18:07:21.737909  463524 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:21.738068  463524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:21.738092  463524 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:21.738104  463524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:21.738387  463524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:07:21.738828  463524 out.go:352] Setting JSON to false
	I0819 18:07:21.739926  463524 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6589,"bootTime":1724084253,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 18:07:21.740000  463524 start.go:139] virtualization:  
	I0819 18:07:21.741775  463524 out.go:177] * [functional-993381] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 18:07:21.743112  463524 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:07:21.743408  463524 notify.go:220] Checking for updates...
	I0819 18:07:21.745398  463524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:07:21.746426  463524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 18:07:21.747428  463524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 18:07:21.748844  463524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:07:21.749946  463524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:07:21.751752  463524 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:07:21.752352  463524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:07:21.787998  463524 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:07:21.788104  463524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:07:21.881571  463524 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 18:07:21.869271603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:07:21.881685  463524 docker.go:307] overlay module found
	I0819 18:07:21.883422  463524 out.go:177] * Using the docker driver based on existing profile
	I0819 18:07:21.884674  463524 start.go:297] selected driver: docker
	I0819 18:07:21.884698  463524 start.go:901] validating driver "docker" against &{Name:functional-993381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-993381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:07:21.884822  463524 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:07:21.886564  463524 out.go:201] 
	W0819 18:07:21.889459  463524 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 18:07:21.890629  463524 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-993381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-993381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (224.046571ms)

-- stdout --
	* [functional-993381] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 18:07:22.289732  463714 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:22.289866  463714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:22.289872  463714 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:22.289877  463714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:22.290225  463714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:07:22.291738  463714 out.go:352] Setting JSON to false
	I0819 18:07:22.292707  463714 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6589,"bootTime":1724084253,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 18:07:22.292787  463714 start.go:139] virtualization:  
	I0819 18:07:22.294570  463714 out.go:177] * [functional-993381] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 18:07:22.295745  463714 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:07:22.295807  463714 notify.go:220] Checking for updates...
	I0819 18:07:22.299567  463714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:07:22.300727  463714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 18:07:22.301722  463714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 18:07:22.302998  463714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:07:22.304111  463714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:07:22.305827  463714 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:07:22.306521  463714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:07:22.338171  463714 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:07:22.338370  463714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:07:22.419252  463714 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 18:07:22.409578215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:07:22.419365  463714 docker.go:307] overlay module found
	I0819 18:07:22.420650  463714 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 18:07:22.421717  463714 start.go:297] selected driver: docker
	I0819 18:07:22.421735  463714 start.go:901] validating driver "docker" against &{Name:functional-993381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-993381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:07:22.421846  463714 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:07:22.423541  463714 out.go:201] 
	W0819 18:07:22.424969  463714 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 18:07:22.426443  463714 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-993381 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-993381 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-km8ck" [b401e4ef-aa37-46d5-a637-3e2838362367] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-km8ck" [b401e4ef-aa37-46d5-a637-3e2838362367] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003810853s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31342
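The endpoint found above is simply the cluster's node IP joined with the NodePort that `expose --type=NodePort` allocated. A minimal sketch of that resolution, using the IP and port from this run's log (the helper name is ours, not minikube code):

```python
# `minikube service <name> --url` resolves a NodePort service to
# http://<node-ip>:<node-port>; values below are from this run's log.
def nodeport_url(node_ip: str, node_port: int) -> str:
    return f"http://{node_ip}:{node_port}"

print(nodeport_url("192.168.49.2", 31342))  # the endpoint the test then curls
```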
functional_test.go:1675: http://192.168.49.2:31342: success! body:

Hostname: hello-node-connect-65d86f57f4-km8ck

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31342
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (23.83s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [10c32d2b-c1b6-4e8a-860e-187b42f6fd77] Running
E0819 18:07:00.090760  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005806053s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-993381 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-993381 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-993381 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-993381 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c25f0ad7-1f4b-47ca-8857-33cc4e171999] Pending
helpers_test.go:344: "sp-pod" [c25f0ad7-1f4b-47ca-8857-33cc4e171999] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c25f0ad7-1f4b-47ca-8857-33cc4e171999] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.007236383s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-993381 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-993381 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-993381 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a34dd88-a705-4191-a572-52e58ba015af] Pending
helpers_test.go:344: "sp-pod" [5a34dd88-a705-4191-a572-52e58ba015af] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a34dd88-a705-4191-a572-52e58ba015af] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004216433s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-993381 exec sp-pod -- ls /tmp/mount
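The repeated "waiting ... for pods matching" lines above come from a poll-until-ready helper that re-checks pod state until it matches or a timeout expires. A generic sketch of that pattern (ours, not the helpers_test.go implementation):

```python
import time

def wait_for(predicate, timeout_s: float, interval_s: float = 0.01) -> bool:
    """Poll predicate until it returns True or the timeout expires
    (sketch of the 'waiting ... for pods matching' pattern)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return False

# Simulate a pod that reports Pending twice, then Running:
states = iter(["Pending", "Pending", "Running"])
print(wait_for(lambda: next(states) == "Running", timeout_s=1.0))  # prints True
```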
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.83s)

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (2.08s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh -n functional-993381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cp functional-993381:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2778141434/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh -n functional-993381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh -n functional-993381 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.08s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/434827/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /etc/test/nested/copy/434827/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/434827.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /etc/ssl/certs/434827.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/434827.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /usr/share/ca-certificates/434827.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4348272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /etc/ssl/certs/4348272.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4348272.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /usr/share/ca-certificates/4348272.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-993381 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
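The go-template in the command above prints only the label *keys* of the first node. The same reduction in Python, over an illustrative object shaped like `kubectl get nodes -o json` output (the labels here are invented for the example, not captured from this cluster):

```python
# Shaped like `kubectl get nodes -o json`; labels are illustrative.
nodes = {
    "items": [
        {"metadata": {"labels": {
            "kubernetes.io/arch": "arm64",
            "kubernetes.io/hostname": "functional-993381",
            "kubernetes.io/os": "linux",
        }}}
    ]
}

# Equivalent of: {{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}
# (Go templates iterate maps in sorted key order, hence sorted() here)
label_keys = sorted(nodes["items"][0]["metadata"]["labels"])
print(" ".join(label_keys))
```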
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "sudo systemctl is-active docker": exit status 1 (351.375436ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "sudo systemctl is-active containerd": exit status 1 (383.34936ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
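Both probes exit with status 3, which `systemctl is-active` reports for units that are not active (exit 0 means active). The test treats any non-zero status as "runtime disabled", which is what a crio profile expects for docker and containerd. A sketch of that interpretation (the helper name is ours):

```python
# `systemctl is-active` exits 0 for an active unit and non-zero otherwise;
# this run shows exit status 3 with "inactive" on stdout for both probes.
def runtime_is_disabled(exit_status: int) -> bool:
    return exit_status != 0

# docker and containerd both exited 3 above, as expected on a crio profile:
print(runtime_is_disabled(3), runtime_is_disabled(0))  # True False
```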
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 version -o=json --components: (1.163929023s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-993381 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-993381
localhost/kicbase/echo-server:functional-993381
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-993381 image ls --format short --alsologtostderr:
I0819 18:07:27.152815  464775 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:27.152949  464775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:27.152959  464775 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:27.152965  464775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:27.153276  464775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
I0819 18:07:27.153967  464775 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:27.154085  464775 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:27.154673  464775 cli_runner.go:164] Run: docker container inspect functional-993381 --format={{.State.Status}}
I0819 18:07:27.180069  464775 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:27.180133  464775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-993381
I0819 18:07:27.200446  464775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/functional-993381/id_rsa Username:docker}
I0819 18:07:27.296688  464775 ssh_runner.go:195] Run: sudo crictl images --output json
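The short listing above is a reduction of `sudo crictl images --output json`: collect every `repoTags` entry and print one tag per line, in descending order. A sketch of that reduction over two records copied from this run's `image ls --format json` output (the code is illustrative, not minikube's implementation):

```python
import json

# Two image records copied from this run's `image ls --format json` output.
raw = """
[{"id": "3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300",
  "repoTags": ["registry.k8s.io/pause:3.3"], "size": "487479"},
 {"id": "8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
  "repoTags": ["registry.k8s.io/pause:latest"], "size": "246070"}]
"""
images = json.loads(raw)

# The short format is just every repo:tag, one per line, descending --
# pause:latest sorts before pause:3.3 in reverse order, matching the listing.
tags = [t for img in images for t in img["repoTags"]]
for tag in sorted(tags, reverse=True):
    print(tag)
```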
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-993381 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-993381  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-993381  | dd2b15f4777ea | 3.33kB |
| localhost/my-image                      | functional-993381  | 646b77fdd37cb | 1.64MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-993381 image ls --format table --alsologtostderr:
I0819 18:07:32.457980  465178 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:32.458186  465178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:32.458214  465178 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:32.458233  465178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:32.458533  465178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
I0819 18:07:32.459227  465178 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:32.459419  465178 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:32.460014  465178 cli_runner.go:164] Run: docker container inspect functional-993381 --format={{.State.Status}}
I0819 18:07:32.476571  465178 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:32.476621  465178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-993381
I0819 18:07:32.492988  465178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/functional-993381/id_rsa Username:docker}
I0819 18:07:32.584648  465178 ssh_runner.go:195] Run: sudo crictl images --output json
2024/08/19 18:07:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
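The Size column above is the byte count from the JSON listing rendered with SI units to three significant digits (e.g. 487479 → 487kB, 139912446 → 140MB). A formatter consistent with the sizes shown in this table (our sketch, not minikube's actual formatter):

```python
def si_size(num_bytes: int) -> str:
    """Render a byte count the way the table does: SI units,
    three significant digits (sketch consistent with the table)."""
    for unit, div in (("GB", 1e9), ("MB", 1e6), ("kB", 1e3)):
        if num_bytes >= div:
            return f"{num_bytes / div:.3g}{unit}"
    return f"{num_bytes}B"

# Byte counts from this run's JSON listing and their table renderings:
print(si_size(487479))     # 487kB   (registry.k8s.io/pause:3.3)
print(si_size(139912446))  # 140MB   (registry.k8s.io/etcd:3.5.15-0)
print(si_size(3774172))    # 3.77MB  (gcr.io/k8s-minikube/busybox:1.28.4-glibc)
```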
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-993381 image ls --format json --alsologtostderr:
[{"id":"646b77fdd37cb2219c5263d881e0b582382ff3a0f129e1317eef97ddb81cb0c7","repoDigests":["localhost/my-image@sha256:b1362d59a117f5af8575ef873e82f2cedb7d5fe17f86de0e6306751132276fc6"],"repoTags":["localhost/my-image:functional-993381"],"size":"1640226"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"71d55d66fd4ee
c8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-
minikube/busybox:latest"],"size":"1634527"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler
:v1.31.0"],"size":"67007814"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256
:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"f2f56ad02e37fc6dc04033dd2397bb3810ca75409fd4b5d560bf1bae4e4d3021","repoDigests":["docker.io/library/75489fa41cce7f492683d4f0441016be68ec5009303f118aaa6e2b704f555d6f-tmp@sha256:13a25473216d7d3ad778a01ae48f3dc8dae0bf0d2307c89456d6302b996508e9"],"repoTags":[],"size":"1637644"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4701792
74deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kin
dest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7
ec3a81e14577add","docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-993381"],"size":"4788229"},{"id":"dd2b15f4777ea3492fa5da70db58a8d09db7cf0d68853461b161425116f6ae60","repoDigests":["localhost/minikube-local-cache-test@sha256:fba5f5deaa9634c801aa8ba68cda18aa6454b1daba41bb7efd60f144f955c616"],"repoTags":["localhost/minikube-local-cache-test:functional-993381"],"size":"3330"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-993381 image ls --format json --alsologtostderr:
I0819 18:07:32.238406  465147 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:32.238565  465147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:32.238578  465147 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:32.238584  465147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:32.238978  465147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
I0819 18:07:32.239770  465147 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:32.239995  465147 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:32.240587  465147 cli_runner.go:164] Run: docker container inspect functional-993381 --format={{.State.Status}}
I0819 18:07:32.257054  465147 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:32.257111  465147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-993381
I0819 18:07:32.273285  465147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/functional-993381/id_rsa Username:docker}
I0819 18:07:32.364939  465147 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
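The `image ls --format json` output captured above is a single JSON array of image records, each with `id`, `repoDigests`, `repoTags`, and a `size` given as a string of bytes. A minimal sketch of consuming that shape with only the standard library; the two sample records are trimmed from the listing above, and the variable names are illustrative:

```python
import json

# Trimmed sample in the same shape as the `image ls --format json` output above.
listing = json.loads("""
[
  {"id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
   "repoTags": ["gcr.io/k8s-minikube/storage-provisioner:v5"], "size": "29037500"},
  {"id": "a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a",
   "repoTags": [], "size": "42263767"}
]
""")

# Untagged (dangling) images show up with an empty repoTags list,
# like the metrics-scraper entry in the listing above.
untagged = [img["id"] for img in listing if not img["repoTags"]]

# Sizes are strings of bytes; convert before summing.
total_bytes = sum(int(img["size"]) for img in listing)

print(len(untagged), total_bytes)  # → 1 71301267
```

The same records also appear in the `--format yaml` section below, so either format can drive this kind of post-processing.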

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-993381 image ls --format yaml --alsologtostderr:
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
- id: dd2b15f4777ea3492fa5da70db58a8d09db7cf0d68853461b161425116f6ae60
repoDigests:
- localhost/minikube-local-cache-test@sha256:fba5f5deaa9634c801aa8ba68cda18aa6454b1daba41bb7efd60f144f955c616
repoTags:
- localhost/minikube-local-cache-test:functional-993381
size: "3330"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-993381
size: "4788229"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-993381 image ls --format yaml --alsologtostderr:
I0819 18:07:27.436025  464808 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:27.436157  464808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:27.436162  464808 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:27.436167  464808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:27.437000  464808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
I0819 18:07:27.438654  464808 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:27.439004  464808 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:27.439703  464808 cli_runner.go:164] Run: docker container inspect functional-993381 --format={{.State.Status}}
I0819 18:07:27.474327  464808 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:27.474389  464808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-993381
I0819 18:07:27.502343  464808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/functional-993381/id_rsa Username:docker}
I0819 18:07:27.618181  464808 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh pgrep buildkitd: exit status 1 (355.288743ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image build -t localhost/my-image:functional-993381 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 image build -t localhost/my-image:functional-993381 testdata/build --alsologtostderr: (3.908393178s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-993381 image build -t localhost/my-image:functional-993381 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f2f56ad02e3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-993381
--> 646b77fdd37
Successfully tagged localhost/my-image:functional-993381
646b77fdd37cb2219c5263d881e0b582382ff3a0f129e1317eef97ddb81cb0c7
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-993381 image build -t localhost/my-image:functional-993381 testdata/build --alsologtostderr:
I0819 18:07:28.123523  464900 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:28.124157  464900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:28.124192  464900 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:28.124211  464900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:28.124566  464900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
I0819 18:07:28.125313  464900 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:28.127339  464900 config.go:182] Loaded profile config "functional-993381": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:28.127996  464900 cli_runner.go:164] Run: docker container inspect functional-993381 --format={{.State.Status}}
I0819 18:07:28.150804  464900 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:28.150853  464900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-993381
I0819 18:07:28.192426  464900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/functional-993381/id_rsa Username:docker}
I0819 18:07:28.293882  464900 build_images.go:161] Building image from path: /tmp/build.4043899679.tar
I0819 18:07:28.293957  464900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 18:07:28.305555  464900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4043899679.tar
I0819 18:07:28.309567  464900 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4043899679.tar: stat -c "%s %y" /var/lib/minikube/build/build.4043899679.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4043899679.tar': No such file or directory
I0819 18:07:28.309608  464900 ssh_runner.go:362] scp /tmp/build.4043899679.tar --> /var/lib/minikube/build/build.4043899679.tar (3072 bytes)
I0819 18:07:28.334946  464900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4043899679
I0819 18:07:28.344594  464900 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4043899679 -xf /var/lib/minikube/build/build.4043899679.tar
I0819 18:07:28.354627  464900 crio.go:315] Building image: /var/lib/minikube/build/build.4043899679
I0819 18:07:28.354693  464900 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-993381 /var/lib/minikube/build/build.4043899679 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0819 18:07:31.931425  464900 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-993381 /var/lib/minikube/build/build.4043899679 --cgroup-manager=cgroupfs: (3.576702332s)
I0819 18:07:31.931507  464900 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4043899679
I0819 18:07:31.940114  464900 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4043899679.tar
I0819 18:07:31.948901  464900 build_images.go:217] Built localhost/my-image:functional-993381 from /tmp/build.4043899679.tar
I0819 18:07:31.948931  464900 build_images.go:133] succeeded building to: functional-993381
I0819 18:07:31.948936  464900 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.50s)
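The build log above shows the staging that `minikube image build` performs before any image work happens: the build context is packed into a `build.<nnn>.tar`, copied into the node, untarred under `/var/lib/minikube/build`, and only then built with `podman build`. The sketch below mirrors just that pack/unpack staging locally; it is not minikube's actual code, and the file names are illustrative:

```python
import pathlib
import tarfile
import tempfile

# Mirror the staging logged above: pack a build context into a tarball,
# then unpack it the way the node side does. Illustrative paths only.
with tempfile.TemporaryDirectory() as work:
    ctx = pathlib.Path(work, "build")
    ctx.mkdir()
    # A context like the test's testdata/build: a Containerfile plus the
    # file its ADD instruction copies in.
    (ctx / "Dockerfile").write_text(
        "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
    )
    (ctx / "content.txt").write_text("hello\n")

    # Pack the context (minikube stages this as /tmp/build.<nnn>.tar).
    tar_path = pathlib.Path(work, "build.tar")
    with tarfile.open(tar_path, "w") as tar:
        tar.add(ctx, arcname=".")

    # Unpack into a fresh directory and confirm the context round-tripped.
    dest = pathlib.Path(work, "unpacked")
    dest.mkdir()
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest)
    names = sorted(p.name for p in dest.iterdir())
    print(names)  # → ['Dockerfile', 'content.txt']
```

Staging through a tarball keeps the copy into the node to a single `scp` regardless of how many files the context contains, which matches the single `scp /tmp/build.4043899679.tar` line in the log.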

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-993381
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image load --daemon kicbase/echo-server:functional-993381 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 image load --daemon kicbase/echo-server:functional-993381 --alsologtostderr: (1.298277283s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image load --daemon kicbase/echo-server:functional-993381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-993381 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-993381 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-ms7kh" [cc701287-0996-4526-83d1-1fb3fa9165ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-ms7kh" [cc701287-0996-4526-83d1-1fb3fa9165ee] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005202192s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-993381
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image load --daemon kicbase/echo-server:functional-993381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image save kicbase/echo-server:functional-993381 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-993381 image save kicbase/echo-server:functional-993381 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (2.119489182s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image rm kicbase/echo-server:functional-993381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-993381
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 image save --daemon kicbase/echo-server:functional-993381 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-993381
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 461301: os: process already finished
helpers_test.go:508: unable to kill pid 461197: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-993381 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [69ed30f0-2e63-4a95-b0bc-3056b7de2571] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [69ed30f0-2e63-4a95-b0bc-3056b7de2571] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003911272s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.36s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service list -o json
functional_test.go:1494: Took "326.735077ms" to run "out/minikube-linux-arm64 -p functional-993381 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32740
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32740
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-993381 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.47.7 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-993381 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "329.570987ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.955073ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "343.120453ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "54.502081ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (7.82s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdany-port1146035823/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724090833163553126" to /tmp/TestFunctionalparallelMountCmdany-port1146035823/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724090833163553126" to /tmp/TestFunctionalparallelMountCmdany-port1146035823/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724090833163553126" to /tmp/TestFunctionalparallelMountCmdany-port1146035823/001/test-1724090833163553126
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.160441ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 test-1724090833163553126
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh cat /mount-9p/test-1724090833163553126
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-993381 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6b871e90-da68-4210-b53e-48d052eab087] Pending
helpers_test.go:344: "busybox-mount" [6b871e90-da68-4210-b53e-48d052eab087] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6b871e90-da68-4210-b53e-48d052eab087] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6b871e90-da68-4210-b53e-48d052eab087] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003397692s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-993381 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdany-port1146035823/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.82s)
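Aside: the first `findmnt -T /mount-9p | grep 9p` probe above exits non-zero because it races the 9p mount coming up; the harness just re-runs the probe until it succeeds. A minimal, self-contained sketch of that retry pattern — `probe` here is a hypothetical stub (fails once, then succeeds) standing in for the real `minikube ssh findmnt` call, so the loop runs without a cluster:

```shell
attempts=0
probe() {
  attempts=$((attempts + 1))
  # Stub: simulate the 9p mount becoming visible on the second probe.
  [ "$attempts" -ge 2 ]
}
until probe; do
  sleep 0.1   # brief back-off between probes
done
echo "mount visible after $attempts probes"
```

With the stub above this prints that the mount became visible after 2 probes, mirroring the fail-then-pass pair of `findmnt` runs in the log.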

TestFunctional/parallel/MountCmd/specific-port (2.04s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdspecific-port2617027951/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.88112ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdspecific-port2617027951/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "sudo umount -f /mount-9p": exit status 1 (351.882075ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-993381 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdspecific-port2617027951/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T" /mount1: exit status 1 (909.794167ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-993381 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-993381 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-993381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup55501734/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-993381
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-993381
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-993381
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (175.45s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-651966 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:08:22.012502  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-651966 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.640909575s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.45s)

TestMultiControlPlane/serial/DeployApp (7.58s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-651966 -- rollout status deployment/busybox: (4.585207162s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-fbbhx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-ghmnw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-tsl7q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-fbbhx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-ghmnw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-tsl7q -- nslookup kubernetes.default
E0819 18:10:38.149052  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-fbbhx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-ghmnw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-tsl7q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.58s)

TestMultiControlPlane/serial/PingHostFromPods (1.54s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-fbbhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-fbbhx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-ghmnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-ghmnw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-tsl7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-651966 -- exec busybox-7dff88458-tsl7q -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.54s)
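Aside: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from the fifth line of busybox-style nslookup output. A runnable sketch using canned output in place of a live lookup — the here-doc layout is an assumption about that output's shape, not taken from this log:

```shell
# Canned busybox-style nslookup output; line 5 is "Address 1: <ip> <name>",
# so NR==5 selects it and field 3 (space-delimited) is the IP itself.
host_ip=$(cat <<'EOF' | awk 'NR==5' | cut -d' ' -f3
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
EOF
)
echo "$host_ip"
```

On this canned input the pipeline yields `192.168.49.1`, the gateway address the pods then ping.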

TestMultiControlPlane/serial/AddWorkerNode (39.2s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-651966 -v=7 --alsologtostderr
E0819 18:11:05.854813  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-651966 -v=7 --alsologtostderr: (38.22058341s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (39.20s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-651966 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (18.65s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp testdata/cp-test.txt ha-651966:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3749067161/001/cp-test_ha-651966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966:/home/docker/cp-test.txt ha-651966-m02:/home/docker/cp-test_ha-651966_ha-651966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test_ha-651966_ha-651966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966:/home/docker/cp-test.txt ha-651966-m03:/home/docker/cp-test_ha-651966_ha-651966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test_ha-651966_ha-651966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966:/home/docker/cp-test.txt ha-651966-m04:/home/docker/cp-test_ha-651966_ha-651966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test_ha-651966_ha-651966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp testdata/cp-test.txt ha-651966-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3749067161/001/cp-test_ha-651966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m02:/home/docker/cp-test.txt ha-651966:/home/docker/cp-test_ha-651966-m02_ha-651966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test_ha-651966-m02_ha-651966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m02:/home/docker/cp-test.txt ha-651966-m03:/home/docker/cp-test_ha-651966-m02_ha-651966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test_ha-651966-m02_ha-651966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m02:/home/docker/cp-test.txt ha-651966-m04:/home/docker/cp-test_ha-651966-m02_ha-651966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test_ha-651966-m02_ha-651966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp testdata/cp-test.txt ha-651966-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3749067161/001/cp-test_ha-651966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m03:/home/docker/cp-test.txt ha-651966:/home/docker/cp-test_ha-651966-m03_ha-651966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test_ha-651966-m03_ha-651966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m03:/home/docker/cp-test.txt ha-651966-m02:/home/docker/cp-test_ha-651966-m03_ha-651966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test_ha-651966-m03_ha-651966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m03:/home/docker/cp-test.txt ha-651966-m04:/home/docker/cp-test_ha-651966-m03_ha-651966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test_ha-651966-m03_ha-651966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp testdata/cp-test.txt ha-651966-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3749067161/001/cp-test_ha-651966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m04:/home/docker/cp-test.txt ha-651966:/home/docker/cp-test_ha-651966-m04_ha-651966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966 "sudo cat /home/docker/cp-test_ha-651966-m04_ha-651966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m04:/home/docker/cp-test.txt ha-651966-m02:/home/docker/cp-test_ha-651966-m04_ha-651966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m02 "sudo cat /home/docker/cp-test_ha-651966-m04_ha-651966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 cp ha-651966-m04:/home/docker/cp-test.txt ha-651966-m03:/home/docker/cp-test_ha-651966-m04_ha-651966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 ssh -n ha-651966-m03 "sudo cat /home/docker/cp-test_ha-651966-m04_ha-651966-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.65s)

TestMultiControlPlane/serial/StopSecondaryNode (12.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 node stop m02 -v=7 --alsologtostderr
E0819 18:11:46.042146  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.048579  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.060104  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.081753  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.123201  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.204632  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.366198  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:46.687840  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:47.329509  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:11:48.611370  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-651966 node stop m02 -v=7 --alsologtostderr: (12.000538735s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
E0819 18:11:51.172690  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr: exit status 7 (724.69324ms)

-- stdout --
	ha-651966
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-651966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651966-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-651966-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:11:51.205378  480779 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:11:51.205529  480779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:51.205540  480779 out.go:358] Setting ErrFile to fd 2...
	I0819 18:11:51.205546  480779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:11:51.205789  480779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:11:51.205978  480779 out.go:352] Setting JSON to false
	I0819 18:11:51.206021  480779 mustload.go:65] Loading cluster: ha-651966
	I0819 18:11:51.206124  480779 notify.go:220] Checking for updates...
	I0819 18:11:51.206452  480779 config.go:182] Loaded profile config "ha-651966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:11:51.206465  480779 status.go:255] checking status of ha-651966 ...
	I0819 18:11:51.206956  480779 cli_runner.go:164] Run: docker container inspect ha-651966 --format={{.State.Status}}
	I0819 18:11:51.230228  480779 status.go:330] ha-651966 host status = "Running" (err=<nil>)
	I0819 18:11:51.230254  480779 host.go:66] Checking if "ha-651966" exists ...
	I0819 18:11:51.230609  480779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651966
	I0819 18:11:51.260880  480779 host.go:66] Checking if "ha-651966" exists ...
	I0819 18:11:51.261198  480779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:11:51.261511  480779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651966
	I0819 18:11:51.280368  480779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/ha-651966/id_rsa Username:docker}
	I0819 18:11:51.385574  480779 ssh_runner.go:195] Run: systemctl --version
	I0819 18:11:51.389804  480779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:11:51.401720  480779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:11:51.458213  480779 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 18:11:51.447729447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:11:51.458827  480779 kubeconfig.go:125] found "ha-651966" server: "https://192.168.49.254:8443"
	I0819 18:11:51.458862  480779 api_server.go:166] Checking apiserver status ...
	I0819 18:11:51.458908  480779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:11:51.470036  480779 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I0819 18:11:51.479877  480779 api_server.go:182] apiserver freezer: "7:freezer:/docker/fb74e8cc2acc9b30a861cd2f1160b671edaf22015578680a7b163796487b1fc4/crio/crio-8b749b382ca3976563e01feaa9d13bb20a77c638390fc7d83caf63cd8a95929f"
	I0819 18:11:51.479954  480779 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fb74e8cc2acc9b30a861cd2f1160b671edaf22015578680a7b163796487b1fc4/crio/crio-8b749b382ca3976563e01feaa9d13bb20a77c638390fc7d83caf63cd8a95929f/freezer.state
	I0819 18:11:51.488916  480779 api_server.go:204] freezer state: "THAWED"
	I0819 18:11:51.488946  480779 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:11:51.497056  480779 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:11:51.497087  480779 status.go:422] ha-651966 apiserver status = Running (err=<nil>)
	I0819 18:11:51.497099  480779 status.go:257] ha-651966 status: &{Name:ha-651966 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:11:51.497153  480779 status.go:255] checking status of ha-651966-m02 ...
	I0819 18:11:51.497516  480779 cli_runner.go:164] Run: docker container inspect ha-651966-m02 --format={{.State.Status}}
	I0819 18:11:51.514383  480779 status.go:330] ha-651966-m02 host status = "Stopped" (err=<nil>)
	I0819 18:11:51.514408  480779 status.go:343] host is not running, skipping remaining checks
	I0819 18:11:51.514416  480779 status.go:257] ha-651966-m02 status: &{Name:ha-651966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:11:51.514437  480779 status.go:255] checking status of ha-651966-m03 ...
	I0819 18:11:51.514760  480779 cli_runner.go:164] Run: docker container inspect ha-651966-m03 --format={{.State.Status}}
	I0819 18:11:51.531764  480779 status.go:330] ha-651966-m03 host status = "Running" (err=<nil>)
	I0819 18:11:51.531799  480779 host.go:66] Checking if "ha-651966-m03" exists ...
	I0819 18:11:51.532143  480779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651966-m03
	I0819 18:11:51.549356  480779 host.go:66] Checking if "ha-651966-m03" exists ...
	I0819 18:11:51.549701  480779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:11:51.549749  480779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651966-m03
	I0819 18:11:51.566960  480779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/ha-651966-m03/id_rsa Username:docker}
	I0819 18:11:51.662787  480779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:11:51.675793  480779 kubeconfig.go:125] found "ha-651966" server: "https://192.168.49.254:8443"
	I0819 18:11:51.675823  480779 api_server.go:166] Checking apiserver status ...
	I0819 18:11:51.675868  480779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:11:51.687088  480779 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1303/cgroup
	I0819 18:11:51.696607  480779 api_server.go:182] apiserver freezer: "7:freezer:/docker/b86a77014f59e671ab2a1d634ca553e8aa6122ae5f8389dd627c11720ec37653/crio/crio-96b873efc701befe7204c360c1f74d3737dae1335dfab147756cb91592812a23"
	I0819 18:11:51.696728  480779 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b86a77014f59e671ab2a1d634ca553e8aa6122ae5f8389dd627c11720ec37653/crio/crio-96b873efc701befe7204c360c1f74d3737dae1335dfab147756cb91592812a23/freezer.state
	I0819 18:11:51.705925  480779 api_server.go:204] freezer state: "THAWED"
	I0819 18:11:51.705954  480779 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:11:51.713882  480779 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:11:51.713923  480779 status.go:422] ha-651966-m03 apiserver status = Running (err=<nil>)
	I0819 18:11:51.713939  480779 status.go:257] ha-651966-m03 status: &{Name:ha-651966-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:11:51.713959  480779 status.go:255] checking status of ha-651966-m04 ...
	I0819 18:11:51.714290  480779 cli_runner.go:164] Run: docker container inspect ha-651966-m04 --format={{.State.Status}}
	I0819 18:11:51.731388  480779 status.go:330] ha-651966-m04 host status = "Running" (err=<nil>)
	I0819 18:11:51.731426  480779 host.go:66] Checking if "ha-651966-m04" exists ...
	I0819 18:11:51.731738  480779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-651966-m04
	I0819 18:11:51.749318  480779 host.go:66] Checking if "ha-651966-m04" exists ...
	I0819 18:11:51.749629  480779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:11:51.749676  480779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-651966-m04
	I0819 18:11:51.766687  480779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/ha-651966-m04/id_rsa Username:docker}
	I0819 18:11:51.863025  480779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:11:51.876790  480779 status.go:257] ha-651966-m04 status: &{Name:ha-651966-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
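The node blocks in the `-- stdout --` section above have a regular shape (node-name line, then `key: value` fields, blocks separated by blank lines), so the plain-text `minikube status` output can be machine-parsed even without `--output json`. A hypothetical sketch, with a sample taken from the output above:

```python
def parse_minikube_status(text: str) -> dict:
    """Parse `minikube status` plain-text output into {node: {field: value}}.
    Assumes the block layout shown above; not an official output contract."""
    nodes, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            current = None          # blank line closes the current node block
        elif ": " in line:
            key, value = line.split(": ", 1)
            nodes[current][key] = value
        else:
            current = line          # a bare line starts a new node block
            nodes[current] = {}
    return nodes


sample = """ha-651966
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-651966-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
"""
status = parse_minikube_status(sample)
print(status["ha-651966-m02"]["host"])  # Stopped
```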
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.73s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (24.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 node start m02 -v=7 --alsologtostderr
E0819 18:11:56.294890  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:06.536965  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-651966 node start m02 -v=7 --alsologtostderr: (22.684989263s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr: (1.515025777s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.876186409s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-651966 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-651966 -v=7 --alsologtostderr
E0819 18:12:27.018376  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-651966 -v=7 --alsologtostderr: (36.976900882s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-651966 --wait=true -v=7 --alsologtostderr
E0819 18:13:07.980405  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:14:29.901839  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-651966 --wait=true -v=7 --alsologtostderr: (2m17.779815148s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-651966
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.91s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-651966 node delete m03 -v=7 --alsologtostderr: (11.787533926s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.76s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (35.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 stop -v=7 --alsologtostderr
E0819 18:15:38.149321  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-651966 stop -v=7 --alsologtostderr: (35.732496403s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr: exit status 7 (114.287934ms)

-- stdout --
	ha-651966
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-651966-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:16:06.643516  494503 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:16:06.643638  494503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:16:06.643650  494503 out.go:358] Setting ErrFile to fd 2...
	I0819 18:16:06.643654  494503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:16:06.643924  494503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:16:06.644111  494503 out.go:352] Setting JSON to false
	I0819 18:16:06.644154  494503 mustload.go:65] Loading cluster: ha-651966
	I0819 18:16:06.644282  494503 notify.go:220] Checking for updates...
	I0819 18:16:06.644652  494503 config.go:182] Loaded profile config "ha-651966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:06.644665  494503 status.go:255] checking status of ha-651966 ...
	I0819 18:16:06.645184  494503 cli_runner.go:164] Run: docker container inspect ha-651966 --format={{.State.Status}}
	I0819 18:16:06.663469  494503 status.go:330] ha-651966 host status = "Stopped" (err=<nil>)
	I0819 18:16:06.663493  494503 status.go:343] host is not running, skipping remaining checks
	I0819 18:16:06.663508  494503 status.go:257] ha-651966 status: &{Name:ha-651966 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:16:06.663532  494503 status.go:255] checking status of ha-651966-m02 ...
	I0819 18:16:06.663860  494503 cli_runner.go:164] Run: docker container inspect ha-651966-m02 --format={{.State.Status}}
	I0819 18:16:06.695927  494503 status.go:330] ha-651966-m02 host status = "Stopped" (err=<nil>)
	I0819 18:16:06.695955  494503 status.go:343] host is not running, skipping remaining checks
	I0819 18:16:06.695962  494503 status.go:257] ha-651966-m02 status: &{Name:ha-651966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:16:06.695989  494503 status.go:255] checking status of ha-651966-m04 ...
	I0819 18:16:06.696409  494503 cli_runner.go:164] Run: docker container inspect ha-651966-m04 --format={{.State.Status}}
	I0819 18:16:06.713379  494503 status.go:330] ha-651966-m04 host status = "Stopped" (err=<nil>)
	I0819 18:16:06.713405  494503 status.go:343] host is not running, skipping remaining checks
	I0819 18:16:06.713431  494503 status.go:257] ha-651966-m04 status: &{Name:ha-651966-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.85s)

TestMultiControlPlane/serial/RestartCluster (109.9s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-651966 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:16:46.040329  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:17:13.743381  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-651966 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m48.90831267s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
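The go-template in the last command prints, for every node, the status of its `Ready` condition. The same filter can be expressed in Python over the JSON form of the same data (`kubectl get nodes -o json`); the trimmed-down sample payload below is hypothetical:

```python
def ready_statuses(nodes_json: dict) -> list:
    """Equivalent of the go-template above: for each node item, emit the
    status of its `Ready` condition ("True"/"False"/"Unknown")."""
    return [
        cond["status"]
        for item in nodes_json.get("items", [])
        for cond in item.get("status", {}).get("conditions", [])
        if cond.get("type") == "Ready"
    ]


# Hypothetical trimmed-down `kubectl get nodes -o json` payload:
sample = {"items": [
    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
                               {"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]}
print(ready_statuses(sample))  # ['True', 'True']
```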
--- PASS: TestMultiControlPlane/serial/RestartCluster (109.90s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (74.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-651966 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-651966 --control-plane -v=7 --alsologtostderr: (1m13.781015732s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-651966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (47.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-923490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-923490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (47.84273043s)
--- PASS: TestJSONOutput/start/Command (47.85s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-923490 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-923490 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-923490 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-923490 --output=json --user=testUser: (5.835069745s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-124367 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-124367 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.476055ms)

-- stdout --
	{"specversion":"1.0","id":"816246bc-fadf-499d-88d9-459d8685afb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-124367] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b22d30fe-f86c-4704-8e83-e1522ea1b4ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"24460108-5688-4f2a-a6a1-6ad7a72c13ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"938d1b4e-9f3d-41f6-a5ab-b747c0f37bb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig"}}
	{"specversion":"1.0","id":"a41e370b-1698-4307-a069-eef59d11409c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube"}}
	{"specversion":"1.0","id":"6b798df2-0891-4408-b926-b191cf086619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"861d130f-99aa-438c-9deb-b04f41843366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02c0572d-38a0-4d38-969d-95144fce71ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-124367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-124367
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (40.83s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-674790 --network=
E0819 18:20:38.149590  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-674790 --network=: (38.724349525s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-674790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-674790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-674790: (2.077348414s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.83s)

TestKicCustomNetwork/use_default_bridge_network (35.69s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-285599 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-285599 --network=bridge: (33.637006135s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-285599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-285599
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-285599: (2.024641643s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.69s)

TestKicExistingNetwork (32.9s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-617179 --network=existing-network
E0819 18:21:46.040371  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:22:01.217550  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-617179 --network=existing-network: (30.68781787s)
helpers_test.go:175: Cleaning up "existing-network-617179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-617179
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-617179: (2.048506014s)
--- PASS: TestKicExistingNetwork (32.90s)

TestKicCustomSubnet (33.7s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-542952 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-542952 --subnet=192.168.60.0/24: (31.63351554s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-542952 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-542952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-542952
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-542952: (2.039605004s)
--- PASS: TestKicCustomSubnet (33.70s)

TestKicStaticIP (34.26s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-440638 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-440638 --static-ip=192.168.200.200: (31.959180321s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-440638 ip
helpers_test.go:175: Cleaning up "static-ip-440638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-440638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-440638: (2.144765554s)
--- PASS: TestKicStaticIP (34.26s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.06s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-132054 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-132054 --driver=docker  --container-runtime=crio: (30.982296199s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-135180 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-135180 --driver=docker  --container-runtime=crio: (33.49333622s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-132054
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-135180
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-135180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-135180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-135180: (2.02329792s)
helpers_test.go:175: Cleaning up "first-132054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-132054
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-132054: (2.31063824s)
--- PASS: TestMinikubeProfile (70.06s)

TestMountStart/serial/StartWithMountFirst (6.47s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-582590 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-582590 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.469420665s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.47s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-582590 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.34s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-595947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-595947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.341638665s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.34s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-595947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-582590 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-582590 --alsologtostderr -v=5: (1.635733026s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-595947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-595947
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-595947: (1.207371095s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.02s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-595947
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-595947: (7.022434365s)
--- PASS: TestMountStart/serial/RestartStopped (8.02s)

TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-595947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (78.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538771 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:25:38.148921  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538771 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.277390646s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.78s)

TestMultiNode/serial/DeployApp2Nodes (4.63s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-538771 -- rollout status deployment/busybox: (2.623236781s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-5bv5q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-blhwv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-5bv5q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-blhwv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-5bv5q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-blhwv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.63s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-5bv5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-5bv5q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-blhwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538771 -- exec busybox-7dff88458-blhwv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (30.38s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-538771 -v 3 --alsologtostderr
E0819 18:26:46.041052  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-538771 -v 3 --alsologtostderr: (29.70464112s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.38s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-538771 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.03s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp testdata/cp-test.txt multinode-538771:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2087052874/001/cp-test_multinode-538771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771:/home/docker/cp-test.txt multinode-538771-m02:/home/docker/cp-test_multinode-538771_multinode-538771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test_multinode-538771_multinode-538771-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771:/home/docker/cp-test.txt multinode-538771-m03:/home/docker/cp-test_multinode-538771_multinode-538771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test_multinode-538771_multinode-538771-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp testdata/cp-test.txt multinode-538771-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2087052874/001/cp-test_multinode-538771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m02:/home/docker/cp-test.txt multinode-538771:/home/docker/cp-test_multinode-538771-m02_multinode-538771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test_multinode-538771-m02_multinode-538771.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m02:/home/docker/cp-test.txt multinode-538771-m03:/home/docker/cp-test_multinode-538771-m02_multinode-538771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test_multinode-538771-m02_multinode-538771-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp testdata/cp-test.txt multinode-538771-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2087052874/001/cp-test_multinode-538771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m03:/home/docker/cp-test.txt multinode-538771:/home/docker/cp-test_multinode-538771-m03_multinode-538771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771 "sudo cat /home/docker/cp-test_multinode-538771-m03_multinode-538771.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 cp multinode-538771-m03:/home/docker/cp-test.txt multinode-538771-m02:/home/docker/cp-test_multinode-538771-m03_multinode-538771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 ssh -n multinode-538771-m02 "sudo cat /home/docker/cp-test_multinode-538771-m03_multinode-538771-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.03s)

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-538771 node stop m03: (1.408733665s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538771 status: exit status 7 (528.916736ms)

-- stdout --
	multinode-538771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr: exit status 7 (515.350952ms)

-- stdout --
	multinode-538771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 18:27:01.946090  547748 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:27:01.946249  547748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:27:01.946260  547748 out.go:358] Setting ErrFile to fd 2...
	I0819 18:27:01.946266  547748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:27:01.946524  547748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:27:01.946708  547748 out.go:352] Setting JSON to false
	I0819 18:27:01.946749  547748 mustload.go:65] Loading cluster: multinode-538771
	I0819 18:27:01.946851  547748 notify.go:220] Checking for updates...
	I0819 18:27:01.947217  547748 config.go:182] Loaded profile config "multinode-538771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:27:01.947230  547748 status.go:255] checking status of multinode-538771 ...
	I0819 18:27:01.947738  547748 cli_runner.go:164] Run: docker container inspect multinode-538771 --format={{.State.Status}}
	I0819 18:27:01.969589  547748 status.go:330] multinode-538771 host status = "Running" (err=<nil>)
	I0819 18:27:01.969616  547748 host.go:66] Checking if "multinode-538771" exists ...
	I0819 18:27:01.969937  547748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538771
	I0819 18:27:01.991874  547748 host.go:66] Checking if "multinode-538771" exists ...
	I0819 18:27:01.992196  547748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:27:01.992266  547748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538771
	I0819 18:27:02.016996  547748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/multinode-538771/id_rsa Username:docker}
	I0819 18:27:02.109819  547748 ssh_runner.go:195] Run: systemctl --version
	I0819 18:27:02.114555  547748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:27:02.126368  547748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:27:02.192150  547748 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 18:27:02.181644289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:27:02.192798  547748 kubeconfig.go:125] found "multinode-538771" server: "https://192.168.67.2:8443"
	I0819 18:27:02.192835  547748 api_server.go:166] Checking apiserver status ...
	I0819 18:27:02.192882  547748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:27:02.203747  547748 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I0819 18:27:02.213398  547748 api_server.go:182] apiserver freezer: "7:freezer:/docker/55f6211049dafaa4d7c98f660c6388c45b8e24b0626f25bded641b9f18d9ec06/crio/crio-f4c85ffe460445f1108687ffae5f90b877b7e0676aa1e178f2cb6defffc58360"
	I0819 18:27:02.213478  547748 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/55f6211049dafaa4d7c98f660c6388c45b8e24b0626f25bded641b9f18d9ec06/crio/crio-f4c85ffe460445f1108687ffae5f90b877b7e0676aa1e178f2cb6defffc58360/freezer.state
	I0819 18:27:02.222555  547748 api_server.go:204] freezer state: "THAWED"
	I0819 18:27:02.222598  547748 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 18:27:02.231331  547748 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 18:27:02.231359  547748 status.go:422] multinode-538771 apiserver status = Running (err=<nil>)
	I0819 18:27:02.231370  547748 status.go:257] multinode-538771 status: &{Name:multinode-538771 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:27:02.231387  547748 status.go:255] checking status of multinode-538771-m02 ...
	I0819 18:27:02.231743  547748 cli_runner.go:164] Run: docker container inspect multinode-538771-m02 --format={{.State.Status}}
	I0819 18:27:02.249106  547748 status.go:330] multinode-538771-m02 host status = "Running" (err=<nil>)
	I0819 18:27:02.249135  547748 host.go:66] Checking if "multinode-538771-m02" exists ...
	I0819 18:27:02.249471  547748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538771-m02
	I0819 18:27:02.267388  547748 host.go:66] Checking if "multinode-538771-m02" exists ...
	I0819 18:27:02.267711  547748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:27:02.267860  547748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538771-m02
	I0819 18:27:02.285532  547748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/19478-429440/.minikube/machines/multinode-538771-m02/id_rsa Username:docker}
	I0819 18:27:02.377222  547748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:27:02.388835  547748 status.go:257] multinode-538771-m02 status: &{Name:multinode-538771-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:27:02.388869  547748 status.go:255] checking status of multinode-538771-m03 ...
	I0819 18:27:02.389188  547748 cli_runner.go:164] Run: docker container inspect multinode-538771-m03 --format={{.State.Status}}
	I0819 18:27:02.406057  547748 status.go:330] multinode-538771-m03 host status = "Stopped" (err=<nil>)
	I0819 18:27:02.406083  547748 status.go:343] host is not running, skipping remaining checks
	I0819 18:27:02.406090  547748 status.go:257] multinode-538771-m03 status: &{Name:multinode-538771-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
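As an aside, the plain-text `minikube status` stdout captured above (node name followed by `key: value` lines, one blank-separated block per node) is straightforward to post-process. A minimal sketch, assuming blank-line-separated blocks as in the log; the `parse_minikube_status` helper is hypothetical, not part of minikube:

```python
# Hypothetical helper: turn `minikube status` plain-text output into one
# dict per node. Each blank-line-separated block starts with the node name,
# followed by "key: value" lines.
def parse_minikube_status(text):
    nodes = []
    for block in text.strip().split("\n\n"):
        lines = [ln.strip() for ln in block.splitlines() if ln.strip()]
        if not lines:
            continue
        node = {"name": lines[0]}
        for line in lines[1:]:
            key, _, value = line.partition(":")
            node[key.strip()] = value.strip()
        nodes.append(node)
    return nodes


# Sample modeled on the stdout block above (tabs dropped for brevity).
sample = """\
multinode-538771
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-538771-m03
type: Worker
host: Stopped
kubelet: Stopped
"""
nodes = parse_minikube_status(sample)
```

Note that `minikube status` also signals the stopped node through its exit code (exit status 7 above), which the test asserts on directly.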

TestMultiNode/serial/StartAfterStop (9.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-538771 node start m03 -v=7 --alsologtostderr: (9.170553727s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.94s)

TestMultiNode/serial/RestartKeepsNodes (116.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538771
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-538771
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-538771: (24.848995875s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538771 --wait=true -v=8 --alsologtostderr
E0819 18:28:09.104845  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538771 --wait=true -v=8 --alsologtostderr: (1m31.362086294s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538771
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.34s)

TestMultiNode/serial/DeleteNode (5.72s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-538771 node delete m03: (5.044849844s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

TestMultiNode/serial/StopMultiNode (23.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-538771 stop: (23.627887991s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538771 status: exit status 7 (87.425963ms)

-- stdout --
	multinode-538771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr: exit status 7 (82.236477ms)

-- stdout --
	multinode-538771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 18:29:38.169476  555561 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:29:38.169676  555561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:29:38.169702  555561 out.go:358] Setting ErrFile to fd 2...
	I0819 18:29:38.169721  555561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:29:38.170012  555561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:29:38.170242  555561 out.go:352] Setting JSON to false
	I0819 18:29:38.170310  555561 mustload.go:65] Loading cluster: multinode-538771
	I0819 18:29:38.170408  555561 notify.go:220] Checking for updates...
	I0819 18:29:38.170834  555561 config.go:182] Loaded profile config "multinode-538771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:29:38.170870  555561 status.go:255] checking status of multinode-538771 ...
	I0819 18:29:38.171438  555561 cli_runner.go:164] Run: docker container inspect multinode-538771 --format={{.State.Status}}
	I0819 18:29:38.190007  555561 status.go:330] multinode-538771 host status = "Stopped" (err=<nil>)
	I0819 18:29:38.190029  555561 status.go:343] host is not running, skipping remaining checks
	I0819 18:29:38.190036  555561 status.go:257] multinode-538771 status: &{Name:multinode-538771 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:29:38.190060  555561 status.go:255] checking status of multinode-538771-m02 ...
	I0819 18:29:38.190368  555561 cli_runner.go:164] Run: docker container inspect multinode-538771-m02 --format={{.State.Status}}
	I0819 18:29:38.206902  555561 status.go:330] multinode-538771-m02 host status = "Stopped" (err=<nil>)
	I0819 18:29:38.206922  555561 status.go:343] host is not running, skipping remaining checks
	I0819 18:29:38.206930  555561 status.go:257] multinode-538771-m02 status: &{Name:multinode-538771-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

TestMultiNode/serial/RestartMultiNode (55.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538771 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538771 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.496360759s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538771 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.21s)

TestMultiNode/serial/ValidateNameConflict (32.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538771
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538771-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-538771-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.888127ms)

-- stdout --
	* [multinode-538771-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-538771-m02' is duplicated with machine name 'multinode-538771-m02' in profile 'multinode-538771'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538771-m03 --driver=docker  --container-runtime=crio
E0819 18:30:38.149137  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538771-m03 --driver=docker  --container-runtime=crio: (30.353493312s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-538771
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-538771: exit status 80 (316.281337ms)

-- stdout --
	* Adding node m03 to cluster multinode-538771 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-538771-m03 already exists in multinode-538771-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-538771-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-538771-m03: (2.039779312s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.86s)

TestPreload (128.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611819 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 18:31:46.040380  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611819 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.56303052s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611819 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-611819 image pull gcr.io/k8s-minikube/busybox: (1.874223205s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-611819
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-611819: (5.785986009s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-611819 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-611819 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.636134575s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-611819 image list
helpers_test.go:175: Cleaning up "test-preload-611819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-611819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-611819: (2.467820597s)
--- PASS: TestPreload (128.61s)

TestScheduledStopUnix (108.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-516868 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-516868 --memory=2048 --driver=docker  --container-runtime=crio: (31.105928968s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-516868 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-516868 -n scheduled-stop-516868
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-516868 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-516868 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-516868 -n scheduled-stop-516868
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-516868
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-516868 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-516868
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-516868: exit status 7 (76.124841ms)

-- stdout --
	scheduled-stop-516868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-516868 -n scheduled-stop-516868
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-516868 -n scheduled-stop-516868: exit status 7 (77.035363ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-516868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-516868
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-516868: (5.399602865s)
--- PASS: TestScheduledStopUnix (108.05s)

TestInsufficientStorage (12.94s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-132256 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-132256 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.379434356s)

-- stdout --
	{"specversion":"1.0","id":"b3816ef8-4d2e-4c8c-81c6-716e9d80e4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-132256] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87451076-f0c2-4433-acb3-3390ee642f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"7a9671a8-2b3f-44ce-af1b-6e2cca39e982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba7bdc68-8006-4d6b-a47c-b6aa8a99579d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig"}}
	{"specversion":"1.0","id":"c81f51eb-906a-4729-8171-4eb80414511c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube"}}
	{"specversion":"1.0","id":"c37f213d-6d69-447c-b9ae-ed7dc09d373f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"deaf9878-174a-4173-ba7d-3f07ff9de6a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eb1485cd-eb03-4c59-8b3c-8f7eb129c590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8256ac29-25c7-47ed-9722-9f782779a551","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ce728447-0f19-49d7-933d-3ee51653396f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffa2c115-c9e2-4c2b-aaf4-c540ad05fb1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fd8432ee-f537-4163-abb7-9c0f37004c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-132256\" primary control-plane node in \"insufficient-storage-132256\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f96f3c26-8e17-408f-871e-3fbba6a68d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724062045-19478 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"684f5f79-2eab-4c15-af32-fe2f622e2867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b0734a6-b397-4262-8933-c336c2e00ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-132256 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-132256 --output=json --layout=cluster: exit status 7 (285.431166ms)

-- stdout --
	{"Name":"insufficient-storage-132256","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132256","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 18:35:17.685612  573274 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-132256" does not appear in /home/jenkins/minikube-integration/19478-429440/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-132256 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-132256 --output=json --layout=cluster: exit status 7 (291.215561ms)

-- stdout --
	{"Name":"insufficient-storage-132256","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132256","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 18:35:17.977954  573334 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-132256" does not appear in /home/jenkins/minikube-integration/19478-429440/kubeconfig
	E0819 18:35:17.988625  573334 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/insufficient-storage-132256/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-132256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-132256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-132256: (1.986409209s)
--- PASS: TestInsufficientStorage (12.94s)
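[editor's note] The `status --output=json --layout=cluster` payloads above reuse HTTP-style status codes (507 InsufficientStorage, 500 Error, 405 Stopped). A minimal sketch of walking that JSON to collect unhealthy components — the `unhealthy` helper is hypothetical, not part of minikube:

```python
import json

# Status payload abridged from the `status --output=json --layout=cluster`
# output above; minikube reuses HTTP-style status codes.
payload = json.loads("""
{"Name":"insufficient-storage-132256","StatusCode":507,
 "StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
 "Nodes":[{"Name":"insufficient-storage-132256","StatusCode":507,
           "StatusName":"InsufficientStorage",
           "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
                         "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
""")

def unhealthy(cluster: dict) -> list:
    """Collect every component whose status code is not 2xx."""
    bad = []
    if cluster["StatusCode"] >= 300:
        bad.append(f"cluster:{cluster['StatusName']}")
    for name, comp in cluster.get("Components", {}).items():
        if comp["StatusCode"] >= 300:
            bad.append(f"{name}:{comp['StatusName']}")
    for node in cluster.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp["StatusCode"] >= 300:
                bad.append(f"{node['Name']}/{name}:{comp['StatusName']}")
    return bad

print(unhealthy(payload))
```

On the payload above this reports the cluster itself, the kubeconfig, and both node components as unhealthy, matching the exit status 7 the test tolerates.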

TestRunningBinaryUpgrade (79.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2095318559 start -p running-upgrade-101942 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2095318559 start -p running-upgrade-101942 --memory=2200 --vm-driver=docker  --container-runtime=crio: (48.911073493s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-101942 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-101942 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.912675622s)
helpers_test.go:175: Cleaning up "running-upgrade-101942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-101942
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-101942: (3.313881521s)
--- PASS: TestRunningBinaryUpgrade (79.89s)

TestKubernetesUpgrade (390.85s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 18:36:46.040456  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m12.994924514s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-843539
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-843539: (3.475548914s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-843539 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-843539 status --format={{.Host}}: exit status 7 (182.969028ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.716278708s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-843539 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (119.568811ms)

-- stdout --
	* [kubernetes-upgrade-843539] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-843539
	    minikube start -p kubernetes-upgrade-843539 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8435392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-843539 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-843539 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.692104584s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-843539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-843539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-843539: (2.514548561s)
--- PASS: TestKubernetesUpgrade (390.85s)
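[editor's note] The downgrade step above (exit status 106, `K8S_DOWNGRADE_UNSUPPORTED`) boils down to a version comparison: minikube refuses to move an existing cluster to an older Kubernetes version. An illustrative sketch, not minikube's actual implementation; `check_version_change` is a hypothetical helper:

```python
def parse_version(v: str) -> tuple:
    """Turn 'v1.31.0' into (1, 31, 0) so tuples compare numerically."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_version_change(existing: str, requested: str) -> str:
    # Upgrades and same-version restarts are fine; downgrades are rejected,
    # mirroring the K8S_DOWNGRADE_UNSUPPORTED error seen in the log above.
    if parse_version(requested) < parse_version(existing):
        return (f"K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade "
                f"existing Kubernetes {existing} cluster to {requested}")
    return "ok"

print(check_version_change("v1.31.0", "v1.20.0"))  # downgrade: rejected
print(check_version_change("v1.20.0", "v1.31.0"))  # → ok
```

Note that tuple comparison handles multi-digit components correctly, which a plain string comparison would not (e.g. "v1.9.0" vs "v1.20.0").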

TestMissingContainerUpgrade (152.11s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.567940060 start -p missing-upgrade-824233 --memory=2200 --driver=docker  --container-runtime=crio
E0819 18:35:38.148741  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.567940060 start -p missing-upgrade-824233 --memory=2200 --driver=docker  --container-runtime=crio: (1m14.311768612s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-824233
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-824233: (11.792477646s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-824233
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-824233 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-824233 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.837153194s)
helpers_test.go:175: Cleaning up "missing-upgrade-824233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-824233
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-824233: (1.984042515s)
--- PASS: TestMissingContainerUpgrade (152.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (84.729372ms)

-- stdout --
	* [NoKubernetes-091368] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (39.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-091368 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-091368 --driver=docker  --container-runtime=crio: (38.526827363s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-091368 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.05s)

TestNoKubernetes/serial/StartWithStopK8s (16.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --driver=docker  --container-runtime=crio: (13.731311542s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-091368 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-091368 status -o json: exit status 2 (359.642778ms)

-- stdout --
	{"Name":"NoKubernetes-091368","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-091368
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-091368: (2.172090967s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.26s)

TestNoKubernetes/serial/Start (10.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-091368 --no-kubernetes --driver=docker  --container-runtime=crio: (10.747214609s)
--- PASS: TestNoKubernetes/serial/Start (10.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-091368 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-091368 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.331132ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

TestNoKubernetes/serial/ProfileList (5.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (4.790488836s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.31s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-091368
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-091368: (1.299862053s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-091368 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-091368 --driver=docker  --container-runtime=crio: (7.844846149s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-091368 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-091368 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.171981ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

TestStoppedBinaryUpgrade/Upgrade (75.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.282412418 start -p stopped-upgrade-305859 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.282412418 start -p stopped-upgrade-305859 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.232122575s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.282412418 -p stopped-upgrade-305859 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.282412418 -p stopped-upgrade-305859 stop: (2.086176349s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-305859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 18:38:41.219671  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-305859 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.01543684s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (75.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-305859
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-305859: (1.259031552s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestPause/serial/Start (54.47s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-177782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0819 18:40:38.149243  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-177782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.467149018s)
--- PASS: TestPause/serial/Start (54.47s)

TestPause/serial/SecondStartNoReconfiguration (36.36s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-177782 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 18:41:46.040488  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-177782 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.334456325s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.36s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-177782 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-177782 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-177782 --output=json --layout=cluster: exit status 2 (439.330943ms)

-- stdout --
	{"Name":"pause-177782","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-177782","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
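[editor's note] Collected from the `--layout=cluster` outputs in this report: the cluster-state codes follow HTTP conventions, with 418 ("I'm a teapot") repurposed for Paused. A small lookup covering the codes that appear above:

```python
# State codes observed in this report's `--layout=cluster` JSON outputs.
STATE_NAMES = {
    200: "OK",
    405: "Stopped",
    418: "Paused",
    500: "Error",
    507: "InsufficientStorage",
}

def describe(code: int) -> str:
    """Map a minikube cluster-state code to its name, as seen in this log."""
    return STATE_NAMES.get(code, f"Unknown({code})")

print(describe(418))  # → Paused
print(describe(507))  # → InsufficientStorage
```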

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-177782 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-177782 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-177782 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-177782 --alsologtostderr -v=5: (2.45120653s)
--- PASS: TestPause/serial/DeletePaused (2.45s)

TestPause/serial/VerifyDeletedResources (0.35s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-177782
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-177782: exit status 1 (14.284297ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-177782: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)

TestNetworkPlugins/group/false (3.59s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-489303 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-489303 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (178.881992ms)

-- stdout --
	* [false-489303] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0819 18:42:50.631155  613913 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:42:50.631275  613913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:42:50.631286  613913 out.go:358] Setting ErrFile to fd 2...
	I0819 18:42:50.631291  613913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:42:50.631527  613913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-429440/.minikube/bin
	I0819 18:42:50.631943  613913 out.go:352] Setting JSON to false
	I0819 18:42:50.632909  613913 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8717,"bootTime":1724084253,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0819 18:42:50.632979  613913 start.go:139] virtualization:  
	I0819 18:42:50.636321  613913 out.go:177] * [false-489303] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 18:42:50.638877  613913 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:42:50.639034  613913 notify.go:220] Checking for updates...
	I0819 18:42:50.643861  613913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:42:50.646548  613913 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-429440/kubeconfig
	I0819 18:42:50.649195  613913 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-429440/.minikube
	I0819 18:42:50.651789  613913 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:42:50.654497  613913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:42:50.657672  613913 config.go:182] Loaded profile config "kubernetes-upgrade-843539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:42:50.657778  613913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:42:50.681897  613913 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:42:50.682004  613913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:42:50.748110  613913 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 18:42:50.738651768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:42:50.748287  613913 docker.go:307] overlay module found
	I0819 18:42:50.752927  613913 out.go:177] * Using the docker driver based on user configuration
	I0819 18:42:50.755573  613913 start.go:297] selected driver: docker
	I0819 18:42:50.755588  613913 start.go:901] validating driver "docker" against <nil>
	I0819 18:42:50.755627  613913 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:42:50.758743  613913 out.go:201] 
	W0819 18:42:50.761423  613913 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 18:42:50.763948  613913 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-489303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-489303

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-489303

>>> host: /etc/nsswitch.conf:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/hosts:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/resolv.conf:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-489303

>>> host: crictl pods:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: crictl containers:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> k8s: describe netcat deployment:
error: context "false-489303" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-489303" does not exist

>>> k8s: netcat logs:
error: context "false-489303" does not exist

>>> k8s: describe coredns deployment:
error: context "false-489303" does not exist

>>> k8s: describe coredns pods:
error: context "false-489303" does not exist

>>> k8s: coredns logs:
error: context "false-489303" does not exist

>>> k8s: describe api server pod(s):
error: context "false-489303" does not exist

>>> k8s: api server logs:
error: context "false-489303" does not exist

>>> host: /etc/cni:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: ip a s:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: ip r s:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: iptables-save:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: iptables table nat:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> k8s: describe kube-proxy daemon set:
error: context "false-489303" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-489303" does not exist

>>> k8s: kube-proxy logs:
error: context "false-489303" does not exist

>>> host: kubelet daemon status:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: kubelet daemon config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> k8s: kubelet logs:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-843539
contexts:
- context:
    cluster: kubernetes-upgrade-843539
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-843539
  name: kubernetes-upgrade-843539
current-context: kubernetes-upgrade-843539
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-843539
  user:
    client-certificate: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.crt
    client-key: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-489303

>>> host: docker daemon status:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: docker daemon config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/docker/daemon.json:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: docker system info:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: cri-docker daemon status:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: cri-docker daemon config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: cri-dockerd version:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: containerd daemon status:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: containerd daemon config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/containerd/config.toml:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: containerd config dump:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: crio daemon status:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: crio daemon config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: /etc/crio:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"

>>> host: crio config:
* Profile "false-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-489303"
----------------------- debugLogs end: false-489303 [took: 3.255087021s] --------------------------------
helpers_test.go:175: Cleaning up "false-489303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-489303
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestStartStop/group/old-k8s-version/serial/FirstStart (150.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-888936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 18:44:49.107296  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:38.149274  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:46.040942  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-888936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m30.937653334s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-888936 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2027d5d9-70c0-45b7-83ea-3d5d63a99e7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2027d5d9-70c0-45b7-83ea-3d5d63a99e7f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005262279s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-888936 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-888936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-888936 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-888936 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-888936 --alsologtostderr -v=3: (12.03559216s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-888936 -n old-k8s-version-888936
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-888936 -n old-k8s-version-888936: exit status 7 (128.213553ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-888936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (136.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-888936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-888936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m16.397533036s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-888936 -n old-k8s-version-888936
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (136.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (57.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-077181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-077181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (57.931310668s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.93s)

TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-077181 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7010ab1e-9fd9-4aaa-8772-547ef6c52c1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7010ab1e-9fd9-4aaa-8772-547ef6c52c1f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00525247s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-077181 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-077181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-077181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039992944s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-077181 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-077181 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-077181 --alsologtostderr -v=3: (11.944753542s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-077181 -n embed-certs-077181
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-077181 -n embed-certs-077181: exit status 7 (70.303696ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-077181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (276.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-077181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-077181 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m36.602758571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-077181 -n embed-certs-077181
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (276.95s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gspdb" [cb569435-9b35-40fe-98bc-130f92ce74cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003700708s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gspdb" [cb569435-9b35-40fe-98bc-130f92ce74cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004720817s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-888936 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-888936 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-888936 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-888936 -n old-k8s-version-888936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-888936 -n old-k8s-version-888936: exit status 2 (323.26018ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-888936 -n old-k8s-version-888936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-888936 -n old-k8s-version-888936: exit status 2 (325.073051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-888936 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-888936 -n old-k8s-version-888936
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-888936 -n old-k8s-version-888936
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/FirstStart (68.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-004798 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:50:38.149106  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-004798 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m8.00662156s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.01s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-004798 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72c1c803-a21a-4d06-9a1f-84f6fa6509bd] Pending
helpers_test.go:344: "busybox" [72c1c803-a21a-4d06-9a1f-84f6fa6509bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72c1c803-a21a-4d06-9a1f-84f6fa6509bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003130775s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-004798 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-004798 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-004798 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (11.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-004798 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-004798 --alsologtostderr -v=3: (11.891926241s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.89s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-004798 -n no-preload-004798
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-004798 -n no-preload-004798: exit status 7 (68.428733ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-004798 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (302.15s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-004798 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:51:46.040747  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.104088  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.110613  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.122189  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.143625  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.185114  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.266541  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.428075  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:51.750189  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:52.392298  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:53.674122  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:56.236306  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:01.358668  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:11.600259  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:32.081751  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:53:13.043809  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-004798 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (5m1.772035772s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-004798 -n no-preload-004798
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5pgkp" [90fd19fe-231c-49a3-bc40-ac9d7a2fdf9d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00434909s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5pgkp" [90fd19fe-231c-49a3-bc40-ac9d7a2fdf9d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003769233s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-077181 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-077181 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-077181 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-077181 -n embed-certs-077181
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-077181 -n embed-certs-077181: exit status 2 (322.032942ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-077181 -n embed-certs-077181
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-077181 -n embed-certs-077181: exit status 2 (303.483797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-077181 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-077181 -n embed-certs-077181
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-077181 -n embed-certs-077181
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-442643 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-442643 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (52.297533947s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-442643 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72299569-7f3a-4a5d-bbd3-accf5d20c667] Pending
helpers_test.go:344: "busybox" [72299569-7f3a-4a5d-bbd3-accf5d20c667] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72299569-7f3a-4a5d-bbd3-accf5d20c667] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003626534s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-442643 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-442643 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0819 18:54:34.965385  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-442643 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-442643 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-442643 --alsologtostderr -v=3: (11.923117338s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643: exit status 7 (69.738332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-442643 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-442643 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:55:21.221165  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:55:38.149599  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-442643 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m27.213781694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-84jw8" [1653fa9a-0023-40a7-b3b9-91459f37d90d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004475702s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-84jw8" [1653fa9a-0023-40a7-b3b9-91459f37d90d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004518065s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-004798 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-004798 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-004798 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-004798 -n no-preload-004798
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-004798 -n no-preload-004798: exit status 2 (310.280046ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-004798 -n no-preload-004798
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-004798 -n no-preload-004798: exit status 2 (359.919406ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-004798 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-004798 -n no-preload-004798
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-004798 -n no-preload-004798
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)
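The Pause sequence above calls `minikube pause`, reads each component's status via a Go template, then unpauses; the harness deliberately tolerates the non-zero exits ("status error: exit status 2 (may be ok)"). Below is a minimal sketch of that status-check loop. It uses a stub `minikube` shell function so it runs without a cluster; the stub's outputs and exit codes merely mimic what this log shows and are not minikube's real implementation:

```shell
#!/bin/sh
# Stub standing in for out/minikube-linux-arm64. In the log above,
# `status --format={{.APIServer}}` prints "Paused" and
# `status --format={{.Kubelet}}` prints "Stopped", each exiting with
# status 2 while the cluster is paused.
minikube() {
  case "$2" in
    '--format={{.APIServer}}') echo "Paused";  return 2 ;;
    '--format={{.Kubelet}}')   echo "Stopped"; return 2 ;;
    *)                         echo "Unknown"; return 1 ;;
  esac
}

# Print each component's status, tolerating the non-zero exit that the
# test logs as "status error: exit status 2 (may be ok)".
check() {
  rc=0
  out=$(minikube status "$1" -p no-preload-004798 -n no-preload-004798) || rc=$?
  echo "status=$out exit=$rc"
}

check '--format={{.APIServer}}'   # status=Paused exit=2
check '--format={{.Kubelet}}'     # status=Stopped exit=2
```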

TestStartStop/group/newest-cni/serial/FirstStart (39.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-459595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:56:46.040914  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:51.103978  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-459595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (39.09667631s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-459595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-459595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.239949277s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-459595 --alsologtostderr -v=3
E0819 18:57:18.807494  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-459595 --alsologtostderr -v=3: (1.243863172s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-459595 -n newest-cni-459595
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-459595 -n newest-cni-459595: exit status 7 (86.548758ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-459595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (15.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-459595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-459595 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (15.044094635s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-459595 -n newest-cni-459595
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.38s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-459595 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-459595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-459595 -n newest-cni-459595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-459595 -n newest-cni-459595: exit status 2 (319.052991ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-459595 -n newest-cni-459595
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-459595 -n newest-cni-459595: exit status 2 (318.129394ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-459595 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-459595 -n newest-cni-459595
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-459595 -n newest-cni-459595
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.16s)

TestNetworkPlugins/group/kindnet/Start (51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.998284736s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.00s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8v78m" [663e4554-82a8-4e18-b7db-6c6f79d9519a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004048832s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2shz" [304d9f97-3a6f-4614-9880-ba63d17291d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c2shz" [304d9f97-3a6f-4614-9880-ba63d17291d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004774387s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/auto/Start (57.11s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (57.109574887s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.11s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sdq8j" [2f105dd3-572e-4681-82a9-f0e70995cae0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005415265s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sdq8j" [2f105dd3-572e-4681-82a9-f0e70995cae0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004294509s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-442643 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-442643 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-442643 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643: exit status 2 (433.524313ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643: exit status 2 (404.260592ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-442643 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-442643 -n default-k8s-diff-port-442643
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.80s)
E0819 19:04:26.169819  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.176474  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.188650  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.210071  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.251484  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.333331  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.494815  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:26.816987  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:27.459175  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:28.742099  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:04:31.304441  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/default-k8s-diff-port-442643/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/flannel/Start (49.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.925892488s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.93s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wq8hb" [c4d79b9f-9fce-4626-9297-b1cffc73e64c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wq8hb" [c4d79b9f-9fce-4626-9297-b1cffc73e64c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003764398s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.32s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-czx8p" [c66eec83-8b58-47f7-937b-ef20a3e3f4d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004679711s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2vs5k" [50225cb1-2b8d-46a6-8377-321abdad44a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2vs5k" [50225cb1-2b8d-46a6-8377-321abdad44a5] Running
E0819 19:00:38.149233  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/addons-778133/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.01142567s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

TestNetworkPlugins/group/enable-default-cni/Start (79.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.797505066s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.80s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (75.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0819 19:01:17.023985  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/no-preload-004798/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:01:29.108663  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:01:37.506113  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/no-preload-004798/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:01:46.040443  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/functional-993381/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:01:51.104667  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/old-k8s-version-888936/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m15.635522179s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.64s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kpq7j" [1b5da995-5225-4f92-b8ea-c60d28c7e78e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kpq7j" [1b5da995-5225-4f92-b8ea-c60d28c7e78e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003144094s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nn2lv" [ad716dd8-99ad-4caf-8e56-b615e1b7cdc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nn2lv" [ad716dd8-99ad-4caf-8e56-b615e1b7cdc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003386744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

TestNetworkPlugins/group/calico/Start (64.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.454890918s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.46s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (60.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0819 19:03:32.209378  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.215716  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.227046  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.248751  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.290067  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.371611  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.533425  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:32.855266  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:33.497399  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:34.778697  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:37.341471  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
E0819 19:03:40.389575  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/no-preload-004798/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-489303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.150245805s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.15s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ktkdq" [bedd9a54-872d-4451-b112-d1188f503722] Running
E0819 19:03:42.462906  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004636444s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwpp4" [dad28b2c-c304-4a57-8c24-74b8f3fc53b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 19:03:52.704564  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pwpp4" [dad28b2c-c304-4a57-8c24-74b8f3fc53b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004491484s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.46s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-489303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-489303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-86mlv" [4a973153-84d5-4ecd-abe9-66b914bdbf11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-86mlv" [4a973153-84d5-4ecd-abe9-66b914bdbf11] Running
E0819 19:04:13.186272  434827 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kindnet-489303/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004866045s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-489303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-489303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-552596 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-552596 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-552596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-552596
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-627450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-627450
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.62s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-489303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-489303

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-489303

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/hosts:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/resolv.conf:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-489303

>>> host: crictl pods:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: crictl containers:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> k8s: describe netcat deployment:
error: context "kubenet-489303" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-489303" does not exist

>>> k8s: netcat logs:
error: context "kubenet-489303" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-489303" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-489303" does not exist

>>> k8s: coredns logs:
error: context "kubenet-489303" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-489303" does not exist

>>> k8s: api server logs:
error: context "kubenet-489303" does not exist

>>> host: /etc/cni:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: ip a s:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: ip r s:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: iptables-save:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: iptables table nat:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-489303" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-489303" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-489303" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: kubelet daemon config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> k8s: kubelet logs:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-843539
contexts:
- context:
    cluster: kubernetes-upgrade-843539
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-843539
  name: kubernetes-upgrade-843539
current-context: kubernetes-upgrade-843539
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-843539
  user:
    client-certificate: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.crt
    client-key: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-489303

>>> host: docker daemon status:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: docker daemon config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: docker system info:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: cri-docker daemon status:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: cri-docker daemon config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: cri-dockerd version:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: containerd daemon status:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: containerd daemon config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: containerd config dump:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: crio daemon status:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: crio daemon config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: /etc/crio:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

>>> host: crio config:
* Profile "kubenet-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-489303"

----------------------- debugLogs end: kubenet-489303 [took: 3.474449351s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-489303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-489303
--- SKIP: TestNetworkPlugins/group/kubenet (3.62s)

TestNetworkPlugins/group/cilium (3.83s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-489303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-489303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/hosts:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/resolv.conf:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-489303

>>> host: crictl pods:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: crictl containers:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> k8s: describe netcat deployment:
error: context "cilium-489303" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-489303" does not exist

>>> k8s: netcat logs:
error: context "cilium-489303" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-489303" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-489303" does not exist

>>> k8s: coredns logs:
error: context "cilium-489303" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-489303" does not exist

>>> k8s: api server logs:
error: context "cilium-489303" does not exist

>>> host: /etc/cni:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: ip a s:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: ip r s:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: iptables-save:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: iptables table nat:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-489303

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-489303

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-489303" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-489303" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-489303

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-489303

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-489303" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-489303" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-489303" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-489303" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-489303" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: kubelet daemon config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> k8s: kubelet logs:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19478-429440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-843539
contexts:
- context:
    cluster: kubernetes-upgrade-843539
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 18:42:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-843539
  name: kubernetes-upgrade-843539
current-context: kubernetes-upgrade-843539
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-843539
  user:
    client-certificate: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.crt
    client-key: /home/jenkins/minikube-integration/19478-429440/.minikube/profiles/kubernetes-upgrade-843539/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-489303

>>> host: docker daemon status:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: docker daemon config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: docker system info:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: cri-docker daemon status:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: cri-docker daemon config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: cri-dockerd version:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: containerd daemon status:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: containerd daemon config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: containerd config dump:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: crio daemon status:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: crio daemon config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: /etc/crio:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

>>> host: crio config:
* Profile "cilium-489303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-489303"

----------------------- debugLogs end: cilium-489303 [took: 3.657715632s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-489303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-489303
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)