Test Report: Docker_Linux_crio 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Failed tests (3/327)

| Order | Test                               | Duration (s) |
|-------|------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry       | 74.71        |
| 34    | TestAddons/parallel/Ingress        | 152.94       |
| 36    | TestAddons/parallel/MetricsServer  | 296.53       |
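The registry failure logged below boils down to an HTTP reachability probe: the test expects the in-cluster registry service to answer with an HTTP/1.1 200 within a timeout, and fails when the `wget --spider` run inside a busybox pod times out. A minimal sketch of that kind of check in plain Python (the helper name `probe_registry` is hypothetical; the actual test shells into a pod rather than connecting directly):

```python
import http.client


def probe_registry(host, port=5000, timeout=5):
    """Return True if the registry answers an HTTP GET with status 200.

    Mirrors the intent of the test's in-cluster check
    (`wget --spider -S http://registry.kube-system.svc.cluster.local`):
    any connection error, timeout, or non-200 status counts as a failure.
    """
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failure, and socket timeouts.
        return False
    finally:
        conn.close()
```

In the failing run the probe never got an HTTP response at all ("timed out waiting for the condition"), which points at pod networking or the registry service itself rather than a wrong status code.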
TestAddons/parallel/Registry (74.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.297544ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003528689s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003304211s
addons_test.go:338: (dbg) Run:  kubectl --context addons-162403 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-162403 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-162403 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07645086s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-162403 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 ip
2024/09/20 18:30:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-162403
helpers_test.go:235: (dbg) docker inspect addons-162403:

-- stdout --
	[
	    {
	        "Id": "106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7",
	        "Created": "2024-09-20T18:19:01.134918747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 674901,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:19:01.25004308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hosts",
	        "LogPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7-json.log",
	        "Name": "/addons-162403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-162403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-162403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3-init/diff:/var/lib/docker/overlay2/eaa029c0352c09d5301213b292ed71be17ad3c7af9b304910b3afcbb6087e2a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-162403",
	                "Source": "/var/lib/docker/volumes/addons-162403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-162403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-162403",
	                "name.minikube.sigs.k8s.io": "addons-162403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "488a22f7f2606afe4be623bfdfd275b5b8331f1b931576ea9ec822158b58c0ce",
	            "SandboxKey": "/var/run/docker/netns/488a22f7f260",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-162403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d0901782c3c8698a9caccb5c84dc1c7ad2c5eb6d0b068119a7aad73f3dbaa435",
	                    "EndpointID": "035274aa910e41c214a6f521c4fc53fb707a6152897b47a404b57c9e4e462cf6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-162403",
	                        "106a9fd3effc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-162403 -n addons-162403
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 logs -n 25: (2.976667894s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-536443                                                                     | download-only-536443   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | -o=json --download-only                                                                     | download-only-183655   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | -p download-only-183655                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| delete  | -p download-only-183655                                                                     | download-only-183655   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| delete  | -p download-only-536443                                                                     | download-only-536443   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| delete  | -p download-only-183655                                                                     | download-only-183655   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | download-docker-729301                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-729301                                                                   | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | binary-mirror-249385                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43551                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-249385                                                                     | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| addons  | enable dashboard -p                                                                         | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-162403 --wait=true                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh curl -s                                                                   | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh cat                                                                       | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | /opt/local-path-provisioner/pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-162403 ip                                                                            | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:18:38
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:18:38.955255  674168 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:38.955393  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955405  674168 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:38.955420  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955592  674168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:18:38.956218  674168 out.go:352] Setting JSON to false
	I0920 18:18:38.957151  674168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7263,"bootTime":1726849056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:18:38.957258  674168 start.go:139] virtualization: kvm guest
	I0920 18:18:38.959268  674168 out.go:177] * [addons-162403] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:18:38.960748  674168 notify.go:220] Checking for updates...
	I0920 18:18:38.960767  674168 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:18:38.962055  674168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:18:38.963377  674168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:18:38.964538  674168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:18:38.965672  674168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:18:38.966885  674168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:18:38.968185  674168 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:18:38.989387  674168 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:18:38.989471  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.033969  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.025186058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.034102  674168 docker.go:318] overlay module found
	I0920 18:18:39.035798  674168 out.go:177] * Using the docker driver based on user configuration
	I0920 18:18:39.037025  674168 start.go:297] selected driver: docker
	I0920 18:18:39.037039  674168 start.go:901] validating driver "docker" against <nil>
	I0920 18:18:39.037051  674168 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:18:39.037947  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.085086  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.076841302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.085255  674168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:18:39.085496  674168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:18:39.087167  674168 out.go:177] * Using Docker driver with root privileges
	I0920 18:18:39.088532  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:18:39.088595  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:18:39.088606  674168 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:18:39.088665  674168 start.go:340] cluster config:
	{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:39.089923  674168 out.go:177] * Starting "addons-162403" primary control-plane node in "addons-162403" cluster
	I0920 18:18:39.091072  674168 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:18:39.092598  674168 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:18:39.094070  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:39.094104  674168 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:18:39.094121  674168 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:39.094133  674168 cache.go:56] Caching tarball of preloaded images
	I0920 18:18:39.094252  674168 preload.go:172] Found /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:18:39.094263  674168 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:18:39.094613  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:18:39.094639  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json: {Name:mka678336c738f0ad3cca0a057f366143df6dca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:39.109272  674168 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:39.109425  674168 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:18:39.109447  674168 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:18:39.109453  674168 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:18:39.109467  674168 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:18:39.109477  674168 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:18:51.189040  674168 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:18:51.189079  674168 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:18:51.189135  674168 start.go:360] acquireMachinesLock for addons-162403: {Name:mk331c03eda7bf008a5f6618682622fc66137de8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:18:51.189234  674168 start.go:364] duration metric: took 78.073µs to acquireMachinesLock for "addons-162403"
	I0920 18:18:51.189258  674168 start.go:93] Provisioning new machine with config: &{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:51.189337  674168 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:18:51.191508  674168 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:18:51.191775  674168 start.go:159] libmachine.API.Create for "addons-162403" (driver="docker")
	I0920 18:18:51.191808  674168 client.go:168] LocalClient.Create starting
	I0920 18:18:51.191901  674168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem
	I0920 18:18:51.507907  674168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem
	I0920 18:18:51.677159  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:18:51.691915  674168 cli_runner.go:211] docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:18:51.692010  674168 network_create.go:284] running [docker network inspect addons-162403] to gather additional debugging logs...
	I0920 18:18:51.692035  674168 cli_runner.go:164] Run: docker network inspect addons-162403
	W0920 18:18:51.707711  674168 cli_runner.go:211] docker network inspect addons-162403 returned with exit code 1
	I0920 18:18:51.707746  674168 network_create.go:287] error running [docker network inspect addons-162403]: docker network inspect addons-162403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-162403 not found
	I0920 18:18:51.707769  674168 network_create.go:289] output of [docker network inspect addons-162403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-162403 not found
	
	** /stderr **
	I0920 18:18:51.707870  674168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:18:51.723682  674168 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a5f410}
	I0920 18:18:51.723727  674168 network_create.go:124] attempt to create docker network addons-162403 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:18:51.723786  674168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-162403 addons-162403
	I0920 18:18:51.787135  674168 network_create.go:108] docker network addons-162403 192.168.49.0/24 created
	I0920 18:18:51.787171  674168 kic.go:121] calculated static IP "192.168.49.2" for the "addons-162403" container
	I0920 18:18:51.787234  674168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:18:51.802456  674168 cli_runner.go:164] Run: docker volume create addons-162403 --label name.minikube.sigs.k8s.io=addons-162403 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:18:51.819456  674168 oci.go:103] Successfully created a docker volume addons-162403
	I0920 18:18:51.819546  674168 cli_runner.go:164] Run: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:18:56.747820  674168 cli_runner.go:217] Completed: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.92822817s)
	I0920 18:18:56.747853  674168 oci.go:107] Successfully prepared a docker volume addons-162403
	I0920 18:18:56.747870  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:56.747891  674168 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:18:56.747948  674168 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:19:01.072064  674168 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324069588s)
	I0920 18:19:01.072104  674168 kic.go:203] duration metric: took 4.324208181s to extract preloaded images to volume ...
	W0920 18:19:01.072245  674168 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:19:01.072342  674168 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:19:01.120121  674168 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-162403 --name addons-162403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-162403 --network addons-162403 --ip 192.168.49.2 --volume addons-162403:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:19:01.433919  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Running}}
	I0920 18:19:01.451773  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.468968  674168 cli_runner.go:164] Run: docker exec addons-162403 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:19:01.510599  674168 oci.go:144] the created container "addons-162403" has a running status.
	I0920 18:19:01.510643  674168 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa...
	I0920 18:19:01.839171  674168 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:19:01.868842  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.888555  674168 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:19:01.888581  674168 kic_runner.go:114] Args: [docker exec --privileged addons-162403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:19:01.951628  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.969485  674168 machine.go:93] provisionDockerMachine start ...
	I0920 18:19:01.969572  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:01.988650  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:01.988870  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:01.988884  674168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:19:02.122640  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.122671  674168 ubuntu.go:169] provisioning hostname "addons-162403"
	I0920 18:19:02.122731  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.140337  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.140537  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.140557  674168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-162403 && echo "addons-162403" | sudo tee /etc/hostname
	I0920 18:19:02.286561  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.286650  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.304306  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.304516  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.304533  674168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-162403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-162403/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-162403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:19:02.439353  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:19:02.439404  674168 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-664237/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-664237/.minikube}
	I0920 18:19:02.439441  674168 ubuntu.go:177] setting up certificates
	I0920 18:19:02.439455  674168 provision.go:84] configureAuth start
	I0920 18:19:02.439504  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:02.456858  674168 provision.go:143] copyHostCerts
	I0920 18:19:02.456941  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/ca.pem (1078 bytes)
	I0920 18:19:02.457067  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/cert.pem (1123 bytes)
	I0920 18:19:02.457128  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/key.pem (1679 bytes)
	I0920 18:19:02.457180  674168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem org=jenkins.addons-162403 san=[127.0.0.1 192.168.49.2 addons-162403 localhost minikube]
	I0920 18:19:02.568617  674168 provision.go:177] copyRemoteCerts
	I0920 18:19:02.568695  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:19:02.568736  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.586920  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:02.684045  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:19:02.707472  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:19:02.731956  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:19:02.755601  674168 provision.go:87] duration metric: took 316.131194ms to configureAuth
	I0920 18:19:02.755631  674168 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:19:02.755814  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:02.755914  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.772731  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.772918  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.772936  674168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:19:02.992259  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:19:02.992298  674168 machine.go:96] duration metric: took 1.022790809s to provisionDockerMachine
	I0920 18:19:02.992310  674168 client.go:171] duration metric: took 11.800496863s to LocalClient.Create
	I0920 18:19:02.992331  674168 start.go:167] duration metric: took 11.800557763s to libmachine.API.Create "addons-162403"
	I0920 18:19:02.992341  674168 start.go:293] postStartSetup for "addons-162403" (driver="docker")
	I0920 18:19:02.992353  674168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:19:02.992454  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:19:02.992503  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.008771  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.104327  674168 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:19:03.107709  674168 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:19:03.107745  674168 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:19:03.107753  674168 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:19:03.107760  674168 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:19:03.107771  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/addons for local assets ...
	I0920 18:19:03.107836  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/files for local assets ...
	I0920 18:19:03.107861  674168 start.go:296] duration metric: took 115.514633ms for postStartSetup
	I0920 18:19:03.108152  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.124456  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:19:03.124718  674168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:19:03.124760  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.141718  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.231925  674168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:19:03.236351  674168 start.go:128] duration metric: took 12.046994202s to createHost
	I0920 18:19:03.236388  674168 start.go:83] releasing machines lock for "addons-162403", held for 12.047138719s
	I0920 18:19:03.236447  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.252823  674168 ssh_runner.go:195] Run: cat /version.json
	I0920 18:19:03.252881  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.252896  674168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:19:03.252965  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.270590  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.270812  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.431267  674168 ssh_runner.go:195] Run: systemctl --version
	I0920 18:19:03.435427  674168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:19:03.571297  674168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:19:03.575824  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.593925  674168 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:19:03.594008  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.621210  674168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:19:03.621241  674168 start.go:495] detecting cgroup driver to use...
	I0920 18:19:03.621281  674168 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:19:03.621346  674168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:19:03.636176  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:19:03.646720  674168 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:19:03.646780  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:19:03.659269  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:19:03.672678  674168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:19:03.753551  674168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:19:03.832924  674168 docker.go:233] disabling docker service ...
	I0920 18:19:03.833033  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:19:03.850932  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:19:03.861851  674168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:19:03.936436  674168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:19:04.025605  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:19:04.037271  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:19:04.053234  674168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:19:04.053306  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.062992  674168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:19:04.063067  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.073077  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.082949  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.093166  674168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:19:04.102194  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.111782  674168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.127237  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.137185  674168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:19:04.145365  674168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:19:04.153756  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.227978  674168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:19:04.324503  674168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:19:04.324605  674168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:19:04.328475  674168 start.go:563] Will wait 60s for crictl version
	I0920 18:19:04.328524  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:19:04.331866  674168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:19:04.364842  674168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:19:04.364939  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.404023  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.442587  674168 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:19:04.444061  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:19:04.460165  674168 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:19:04.463995  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:19:04.474789  674168 kubeadm.go:883] updating cluster {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:19:04.474919  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:19:04.474992  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.537318  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.537404  674168 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:19:04.537459  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.571115  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.571143  674168 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:19:04.571153  674168 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:19:04.571259  674168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-162403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:19:04.571321  674168 ssh_runner.go:195] Run: crio config
	I0920 18:19:04.615201  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:04.615225  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:04.615237  674168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:19:04.615259  674168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-162403 NodeName:addons-162403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:19:04.615389  674168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-162403"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:19:04.615447  674168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:19:04.624504  674168 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:19:04.624568  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:19:04.633418  674168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:19:04.650496  674168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:19:04.667763  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 18:19:04.684808  674168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:19:04.688259  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:19:04.698716  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.772157  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:04.785010  674168 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403 for IP: 192.168.49.2
	I0920 18:19:04.785034  674168 certs.go:194] generating shared ca certs ...
	I0920 18:19:04.785055  674168 certs.go:226] acquiring lock for ca certs: {Name:mk4b124302946da10a6534852cdb170d2c9fff4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.785184  674168 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key
	I0920 18:19:04.975314  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt ...
	I0920 18:19:04.975345  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt: {Name:mk70db283e13139496726ffe72d8d96dde32a822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975559  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key ...
	I0920 18:19:04.975584  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key: {Name:mk35cfb4b8c77a9b5e50fcee25a6045ab52d6653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975700  674168 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key
	I0920 18:19:05.060533  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt ...
	I0920 18:19:05.060567  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt: {Name:mk71caa95e512e49d5f0bbeb9669d49d06067538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060774  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key ...
	I0920 18:19:05.060791  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key: {Name:mk48c17978eac1b6467fd589c3690dfaad357164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060889  674168 certs.go:256] generating profile certs ...
	I0920 18:19:05.060964  674168 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key
	I0920 18:19:05.060984  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt with IP's: []
	I0920 18:19:05.132709  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt ...
	I0920 18:19:05.132744  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: {Name:mk43ea5dca75753d8d8a5367831467eeceb0fdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.132939  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key ...
	I0920 18:19:05.132959  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key: {Name:mk5d83dae2938d299506d1c5f284f55c2b17c66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.133062  674168 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af
	I0920 18:19:05.133090  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:19:05.307926  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af ...
	I0920 18:19:05.307962  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af: {Name:mkae84dcee0d54761655975153f0afe30c8c5174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308152  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af ...
	I0920 18:19:05.308174  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af: {Name:mkf96ba0fb78917c3ee6f7335dc544ffcc5224ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308277  674168 certs.go:381] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt
	I0920 18:19:05.308379  674168 certs.go:385] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key
	I0920 18:19:05.308461  674168 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key
	I0920 18:19:05.308486  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt with IP's: []
	I0920 18:19:05.434100  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt ...
	I0920 18:19:05.434142  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt: {Name:mk90e9baf01ada5513109eca2cf59bfe6b10cb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434322  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key ...
	I0920 18:19:05.434336  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key: {Name:mk97b476f9ae1a8b6c97412a5ae795e7d133f43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434511  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 18:19:05.434549  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:19:05.434571  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:19:05.434592  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem (1679 bytes)
	I0920 18:19:05.435207  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:19:05.458404  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:19:05.481726  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:19:05.504545  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:19:05.526862  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:19:05.548944  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:19:05.571483  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:19:05.593408  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:19:05.615754  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:19:05.638295  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:19:05.654802  674168 ssh_runner.go:195] Run: openssl version
	I0920 18:19:05.660087  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:19:05.669718  674168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673149  674168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673209  674168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.679642  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:19:05.689469  674168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:19:05.692656  674168 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:19:05.692709  674168 kubeadm.go:392] StartCluster: {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:19:05.692807  674168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:19:05.692848  674168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:19:05.726380  674168 cri.go:89] found id: ""
	I0920 18:19:05.726441  674168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:19:05.734945  674168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:19:05.743371  674168 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:19:05.743434  674168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:19:05.751458  674168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:19:05.751486  674168 kubeadm.go:157] found existing configuration files:
	
	I0920 18:19:05.751533  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:19:05.759587  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:19:05.759665  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:19:05.767587  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:19:05.775580  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:19:05.775632  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:19:05.783550  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.791364  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:19:05.791431  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.799115  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:19:05.806872  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:19:05.806937  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:19:05.814767  674168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:19:05.849981  674168 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:19:05.850038  674168 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:19:05.866359  674168 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:19:05.866451  674168 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 18:19:05.866546  674168 kubeadm.go:310] OS: Linux
	I0920 18:19:05.866606  674168 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:19:05.866650  674168 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:19:05.866698  674168 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:19:05.866761  674168 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:19:05.866832  674168 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:19:05.866901  674168 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:19:05.866960  674168 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:19:05.867073  674168 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:19:05.867141  674168 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:19:05.916092  674168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:19:05.916231  674168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:19:05.916371  674168 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:19:05.923502  674168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:19:05.926743  674168 out.go:235]   - Generating certificates and keys ...
	I0920 18:19:05.926857  674168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:19:05.926930  674168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:19:06.037108  674168 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:19:06.230359  674168 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:19:06.324616  674168 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:19:06.546085  674168 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:19:06.884456  674168 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:19:06.884577  674168 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.307543  674168 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:19:07.307735  674168 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.569020  674168 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:19:07.702458  674168 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:19:07.850614  674168 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:19:07.850743  674168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:19:07.903971  674168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:19:08.053888  674168 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:19:08.422419  674168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:19:08.545791  674168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:19:08.627541  674168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:19:08.627956  674168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:19:08.631231  674168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:19:08.633449  674168 out.go:235]   - Booting up control plane ...
	I0920 18:19:08.633578  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:19:08.633681  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:19:08.633775  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:19:08.645378  674168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:19:08.650587  674168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:19:08.650659  674168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:19:08.727967  674168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:19:08.728106  674168 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:19:09.229492  674168 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.337636ms
	I0920 18:19:09.229658  674168 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:19:13.730791  674168 kubeadm.go:310] [api-check] The API server is healthy after 4.501479968s
	I0920 18:19:13.742809  674168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:19:13.755431  674168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:19:13.774442  674168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:19:13.774707  674168 kubeadm.go:310] [mark-control-plane] Marking the node addons-162403 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:19:13.782319  674168 kubeadm.go:310] [bootstrap-token] Using token: dfp0rr.g8klnxfszt90e7ou
	I0920 18:19:13.783826  674168 out.go:235]   - Configuring RBAC rules ...
	I0920 18:19:13.783941  674168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:19:13.787166  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:19:13.793657  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:19:13.797189  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:19:13.799957  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:19:13.802629  674168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:19:14.139197  674168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:19:14.568490  674168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:19:15.136897  674168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:19:15.137714  674168 kubeadm.go:310] 
	I0920 18:19:15.137780  674168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:19:15.137788  674168 kubeadm.go:310] 
	I0920 18:19:15.137863  674168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:19:15.137873  674168 kubeadm.go:310] 
	I0920 18:19:15.137906  674168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:19:15.138010  674168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:19:15.138117  674168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:19:15.138134  674168 kubeadm.go:310] 
	I0920 18:19:15.138208  674168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:19:15.138217  674168 kubeadm.go:310] 
	I0920 18:19:15.138283  674168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:19:15.138292  674168 kubeadm.go:310] 
	I0920 18:19:15.138391  674168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:19:15.138525  674168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:19:15.138624  674168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:19:15.138640  674168 kubeadm.go:310] 
	I0920 18:19:15.138736  674168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:19:15.138857  674168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:19:15.138879  674168 kubeadm.go:310] 
	I0920 18:19:15.139024  674168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139190  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae \
	I0920 18:19:15.139223  674168 kubeadm.go:310] 	--control-plane 
	I0920 18:19:15.139231  674168 kubeadm.go:310] 
	I0920 18:19:15.139332  674168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:19:15.139342  674168 kubeadm.go:310] 
	I0920 18:19:15.139453  674168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139569  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae 
	I0920 18:19:15.141419  674168 kubeadm.go:310] W0920 18:19:05.847423    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141788  674168 kubeadm.go:310] W0920 18:19:05.848046    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141998  674168 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 18:19:15.142142  674168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:19:15.142176  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:15.142184  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:15.144217  674168 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:19:15.145705  674168 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:19:15.149559  674168 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:19:15.149575  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:19:15.167148  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:19:15.359568  674168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:19:15.359642  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.359669  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-162403 minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-162403 minikube.k8s.io/primary=true
	I0920 18:19:15.367240  674168 ops.go:34] apiserver oom_adj: -16
	I0920 18:19:15.462349  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.963384  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.462821  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.962540  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.463154  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.962489  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.463105  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.962640  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.463445  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.546496  674168 kubeadm.go:1113] duration metric: took 4.186919442s to wait for elevateKubeSystemPrivileges
	I0920 18:19:19.546589  674168 kubeadm.go:394] duration metric: took 13.853885644s to StartCluster
	I0920 18:19:19.546618  674168 settings.go:142] acquiring lock: {Name:mk3858ba4d2318954bc9bdba2ebdd7d07c1af964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.546761  674168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:19:19.547278  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/kubeconfig: {Name:mk211a7242c57e0384e62621e3b0b410c7b81ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.547568  674168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:19:19.547588  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:19:19.547603  674168 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:19:19.547727  674168 addons.go:69] Setting cloud-spanner=true in profile "addons-162403"
	I0920 18:19:19.547739  674168 addons.go:69] Setting yakd=true in profile "addons-162403"
	I0920 18:19:19.547755  674168 addons.go:234] Setting addon cloud-spanner=true in "addons-162403"
	I0920 18:19:19.547765  674168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-162403"
	I0920 18:19:19.547780  674168 addons.go:69] Setting metrics-server=true in profile "addons-162403"
	I0920 18:19:19.547793  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547804  674168 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:19.547813  674168 addons.go:234] Setting addon metrics-server=true in "addons-162403"
	I0920 18:19:19.547819  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.547838  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547843  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547881  674168 addons.go:69] Setting storage-provisioner=true in profile "addons-162403"
	I0920 18:19:19.547898  674168 addons.go:234] Setting addon storage-provisioner=true in "addons-162403"
	I0920 18:19:19.547923  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548013  674168 addons.go:69] Setting ingress=true in profile "addons-162403"
	I0920 18:19:19.548033  674168 addons.go:234] Setting addon ingress=true in "addons-162403"
	I0920 18:19:19.548078  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548348  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548368  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548372  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548394  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548471  674168 addons.go:69] Setting default-storageclass=true in profile "addons-162403"
	I0920 18:19:19.548500  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-162403"
	I0920 18:19:19.548533  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548792  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549033  674168 addons.go:69] Setting registry=true in profile "addons-162403"
	I0920 18:19:19.549061  674168 addons.go:234] Setting addon registry=true in "addons-162403"
	I0920 18:19:19.549095  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547756  674168 addons.go:234] Setting addon yakd=true in "addons-162403"
	I0920 18:19:19.549524  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.549550  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549933  674168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-162403"
	I0920 18:19:19.549957  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-162403"
	I0920 18:19:19.550006  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550197  674168 addons.go:69] Setting ingress-dns=true in profile "addons-162403"
	I0920 18:19:19.550213  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550225  674168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-162403"
	I0920 18:19:19.550238  674168 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-162403"
	I0920 18:19:19.550263  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.551201  674168 addons.go:69] Setting gcp-auth=true in profile "addons-162403"
	I0920 18:19:19.554213  674168 addons.go:69] Setting inspektor-gadget=true in profile "addons-162403"
	I0920 18:19:19.554281  674168 addons.go:69] Setting volcano=true in profile "addons-162403"
	I0920 18:19:19.554302  674168 addons.go:234] Setting addon volcano=true in "addons-162403"
	I0920 18:19:19.551386  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550214  674168 addons.go:234] Setting addon ingress-dns=true in "addons-162403"
	I0920 18:19:19.554827  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.554304  674168 addons.go:234] Setting addon inspektor-gadget=true in "addons-162403"
	I0920 18:19:19.555122  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.555478  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.555674  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554221  674168 mustload.go:65] Loading cluster: addons-162403
	I0920 18:19:19.554183  674168 out.go:177] * Verifying Kubernetes components...
	I0920 18:19:19.556337  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.556799  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554271  674168 addons.go:69] Setting volumesnapshots=true in profile "addons-162403"
	I0920 18:19:19.557261  674168 addons.go:234] Setting addon volumesnapshots=true in "addons-162403"
	I0920 18:19:19.557308  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.559052  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:19.569182  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.588210  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588739  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588904  674168 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:19:19.588992  674168 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:19:19.590309  674168 addons.go:234] Setting addon default-storageclass=true in "addons-162403"
	I0920 18:19:19.590370  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.590786  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:19:19.590802  674168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:19:19.590864  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.590961  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.591935  674168 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:19:19.593751  674168 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:19:19.593775  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:19:19.593828  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.601351  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:19:19.601355  674168 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:19:19.601442  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.603687  674168 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:19.603717  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:19:19.603786  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.604025  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.608296  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:19:19.609371  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:19:19.610117  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:19.610142  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:19:19.610211  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.612872  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:19:19.614205  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:19:19.615649  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:19:19.616930  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:19:19.618228  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:19:19.618357  674168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:19:19.619747  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:19.619771  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:19:19.619845  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.620114  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:19:19.624754  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:19:19.624710  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:19:19.624879  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:19:19.624952  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.628419  674168 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:19:19.628839  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:19:19.628880  674168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:19:19.628974  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.629898  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:19:19.629920  674168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:19:19.629986  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.635925  674168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:19.635951  674168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:19:19.636128  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.638673  674168 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:19:19.638818  674168 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:19:19.641476  674168 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:19:19.641507  674168 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:19:19.641586  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.641902  674168 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:19.641918  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:19:19.641968  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.644063  674168 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:19:19.647042  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:19.647066  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:19:19.647131  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.651090  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.672918  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.673246  674168 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-162403"
	I0920 18:19:19.673285  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.673746  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	W0920 18:19:19.674079  674168 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:19:19.680928  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.692356  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.699068  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.703084  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.708959  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.709724  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710034  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710800  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.716097  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.718252  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.725687  674168 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:19:19.727095  674168 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:19:19.728444  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:19.728469  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:19:19.728535  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.728936  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.756378  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.851667  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:19.851869  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:19:19.958165  674168 node_ready.go:35] waiting up to 6m0s for node "addons-162403" to be "Ready" ...
	I0920 18:19:20.049122  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:20.059225  674168 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:19:20.059328  674168 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:19:20.143656  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:19:20.143697  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:19:20.162533  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:20.248915  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:20.252979  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:19:20.253073  674168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:19:20.253373  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:20.255477  674168 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:19:20.255545  674168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:19:20.344657  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:19:20.344752  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:19:20.344997  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:20.347913  674168 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:19:20.347984  674168 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:19:20.361494  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:19:20.361598  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:19:20.443778  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:20.460111  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:19:20.460213  674168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:19:20.466113  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:20.556027  674168 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.556125  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:19:20.562330  674168 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:19:20.562372  674168 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:19:20.644614  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:19:20.644712  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:19:20.645083  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:19:20.645155  674168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:19:20.743572  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:19:20.743665  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:19:20.843761  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.863489  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:19:20.863586  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:19:20.866991  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:19:20.867029  674168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:19:20.957725  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:19:20.957824  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:19:21.051014  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:19:21.051107  674168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:19:21.146711  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.146794  674168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:19:21.244660  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:19:21.244769  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:19:21.345912  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:19:21.345949  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:19:21.353497  674168 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.353530  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:19:21.443980  674168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.592066127s)
	I0920 18:19:21.444142  674168 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:19:21.446954  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.447049  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:19:21.451328  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.556343  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.567862  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.643571  674168 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:19:21.643834  674168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:19:21.857128  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:19:21.857204  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:19:21.970271  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:22.055373  674168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-162403" context rescaled to 1 replicas
	I0920 18:19:22.254875  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:19:22.255007  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:19:22.351603  674168 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.351644  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:19:22.745266  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:19:22.745357  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:19:22.950177  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:19:22.950262  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:19:22.950772  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.958386  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.909152791s)
	I0920 18:19:23.143977  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:19:23.144014  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:19:23.344840  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:19:23.344947  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:19:23.463128  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.463229  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:19:23.654193  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.862111  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.699531854s)
	I0920 18:19:24.153748  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:25.659918  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.410895641s)
	I0920 18:19:25.659961  674168 addons.go:475] Verifying addon ingress=true in "addons-162403"
	I0920 18:19:25.659999  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.406542284s)
	I0920 18:19:25.660093  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315030192s)
	I0920 18:19:25.660129  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216231279s)
	I0920 18:19:25.660205  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.193997113s)
	I0920 18:19:25.660276  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.816413561s)
	I0920 18:19:25.660308  674168 addons.go:475] Verifying addon registry=true in "addons-162403"
	I0920 18:19:25.660382  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.208965825s)
	I0920 18:19:25.660442  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.104058168s)
	I0920 18:19:25.660445  674168 addons.go:475] Verifying addon metrics-server=true in "addons-162403"
	I0920 18:19:25.661699  674168 out.go:177] * Verifying registry addon...
	I0920 18:19:25.661755  674168 out.go:177] * Verifying ingress addon...
	I0920 18:19:25.661868  674168 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-162403 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:19:25.663738  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:19:25.664391  674168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0920 18:19:25.668639  674168 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:19:25.668854  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:19:25.668871  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:25.768664  674168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:19:25.768694  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.168189  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.168647  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.244777  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.676860398s)
	W0920 18:19:26.244890  674168 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244939  674168 retry.go:31] will retry after 349.249211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244988  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.294091803s)
	I0920 18:19:26.461459  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:26.574707  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.92045562s)
	I0920 18:19:26.574757  674168 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:26.577367  674168 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:19:26.579563  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:19:26.582943  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:19:26.582960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:26.594681  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:26.683334  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.683674  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.858359  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:19:26.858435  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:26.875902  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:26.984458  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:19:27.001107  674168 addons.go:234] Setting addon gcp-auth=true in "addons-162403"
	I0920 18:19:27.001163  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:27.001520  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:27.018107  674168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:19:27.018153  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:27.035342  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:27.083631  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.166744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:27.168128  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.646290  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.669072  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.669418  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.084361  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.166640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.168138  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:28.462238  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:28.583099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.667640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.667978  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.084266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.167817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.168604  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.271367  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.676631111s)
	I0920 18:19:29.271432  674168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.253291372s)
	I0920 18:19:29.273273  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:29.274673  674168 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:19:29.276361  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:19:29.276382  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:19:29.294783  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:19:29.294816  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:19:29.345482  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.345506  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:19:29.363625  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.583445  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.667504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.668067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.065330  674168 addons.go:475] Verifying addon gcp-auth=true in "addons-162403"
	I0920 18:19:30.067623  674168 out.go:177] * Verifying gcp-auth addon...
	I0920 18:19:30.070321  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:19:30.073240  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:19:30.073265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.083449  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.167256  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.168040  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.574216  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.583194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.667733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.668045  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.961659  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:31.073149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.082855  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.168666  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:31.573991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.582620  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.667824  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.668352  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.073266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.082897  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.167779  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.168380  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.574170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.582879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.667250  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.667809  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.074390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.083130  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.168329  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.461572  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:33.574511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.582999  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.667656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.668054  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.073228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.082952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.168374  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.169326  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.573898  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.583235  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.666598  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.667851  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.074529  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.083233  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.166658  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.167884  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.573980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.582504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.667399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.667855  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.960967  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:36.073874  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.083242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.166883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.168404  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:36.574240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.582733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.667467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.667953  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.073902  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.082616  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.167641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.167921  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.573766  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.583480  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.666947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.667458  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.961890  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:38.073945  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.082640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.167284  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.167840  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:38.574639  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.583506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.667337  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.667789  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.073649  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.084058  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.167781  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.168107  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.574163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.583050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.666763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.668155  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.073200  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.082825  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.167592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.168195  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.461680  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:40.573622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.583124  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.666705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.667590  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.083878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.167259  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.167696  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.573769  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.583407  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.667187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.667621  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.073956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.082469  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.167268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.167773  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.573883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.582802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.667181  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.667648  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.960976  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:43.073526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.083195  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.168076  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:43.574500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.667526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.667955  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.073938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.082232  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.167119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.573757  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.583299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.666525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.668092  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.961566  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:45.074296  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.083265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.166731  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:45.573582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.583070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.666718  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.667763  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.074393  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.083026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.167896  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.168469  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.582611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.667417  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.667835  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.074391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.083342  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.167582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.168016  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.461559  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:47.573674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.583550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.667101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.668093  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.074385  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.083357  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.166820  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.574056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.583138  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.667700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.668170  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.073954  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.082550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.167253  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.167689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.573924  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.582493  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.667268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.667713  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.961127  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:50.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.082751  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.167446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.167837  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:50.573975  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.582446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.667144  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.667725  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.073776  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.083555  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.167603  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.168082  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.573207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.582872  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.667933  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.668639  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.961792  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:52.073650  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.083774  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.167240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:52.574175  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.583088  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.667593  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.668073  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.074115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.082843  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.167552  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.168250  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.574203  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.583096  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.666775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.668043  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.073577  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.083165  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.166822  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.168120  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.461639  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:54.573485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.667881  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.668272  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.074459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.083676  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.168063  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.574347  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.583185  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.666614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.668023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.074436  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.083017  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.167739  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.168067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.574141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.582595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.667193  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.667702  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.961306  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:57.073951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.082426  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.167619  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:57.574066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.582553  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.667363  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.667862  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.074286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.083053  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.168080  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.168562  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.574033  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.582834  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.667744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.667977  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.074041  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.084503  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.167532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.167866  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.461351  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:59.574055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.582662  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.667606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.668345  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.074001  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.082537  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.167389  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.167781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.573646  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.583513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.667237  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.667751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.074614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.083606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.167425  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.167849  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.574159  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.582763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.667525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.667967  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.961782  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:20:02.073687  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.083273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.167793  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.168126  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:02.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.582489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.667286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.667673  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.074061  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.083043  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.167741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.168186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.574298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.583319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.667171  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.667926  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.963598  674168 node_ready.go:49] node "addons-162403" has status "Ready":"True"
	I0920 18:20:03.963697  674168 node_ready.go:38] duration metric: took 44.005491387s for node "addons-162403" to be "Ready" ...
	I0920 18:20:03.963739  674168 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:20:03.975991  674168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:04.073640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.083934  674168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:20:04.083964  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.166878  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:20:04.166911  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:04.574414  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.584293  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.668383  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.668692  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.077146  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.176605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.176677  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.176971  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.574207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.583569  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.668257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.668609  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.982730  674168 pod_ready.go:93] pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.982753  674168 pod_ready.go:82] duration metric: took 2.006720801s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.982772  674168 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987525  674168 pod_ready.go:93] pod "etcd-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.987550  674168 pod_ready.go:82] duration metric: took 4.771792ms for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987564  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992095  674168 pod_ready.go:93] pod "kube-apiserver-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.992119  674168 pod_ready.go:82] duration metric: took 4.547516ms for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992133  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996705  674168 pod_ready.go:93] pod "kube-controller-manager-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.996728  674168 pod_ready.go:82] duration metric: took 4.58678ms for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996742  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001096  674168 pod_ready.go:93] pod "kube-proxy-dd8cb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.001119  674168 pod_ready.go:82] duration metric: took 4.367688ms for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001128  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.074611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.084485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.167894  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.168247  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:06.380446  674168 pod_ready.go:93] pod "kube-scheduler-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.380470  674168 pod_ready.go:82] duration metric: took 379.335122ms for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.380483  674168 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.573654  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.583209  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.669465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.669865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.074546  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.146700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.168630  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.168936  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.574572  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.646002  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.668560  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.669087  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.074484  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.147135  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.168492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.169815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.387061  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.573949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.583549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.668848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.669952  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.075164  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.085141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.168450  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.168903  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.573956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.584733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.668231  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.668811  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.074046  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.084317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.167605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.168539  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.573990  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.584073  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.668505  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.668657  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.886466  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:11.074057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.083511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.168156  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.168499  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:11.574454  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.584057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.667749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.668163  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.074025  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.083478  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.167917  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.168149  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.573943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.583638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.668188  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.074028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.084332  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:13.167761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.168109  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.385693  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.574062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.675513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.675988  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.676028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.074341  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.083682  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.167388  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.168157  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:14.574641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.584170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.667163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.074157  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.083952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.167738  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.168230  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.386551  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.573791  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.583941  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.667622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.667966  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.074020  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.083830  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.167948  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.168175  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.574271  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.583559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.668115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.668332  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.074273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.083969  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.167218  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.168238  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.574490  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.584137  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.667428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.667780  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.886239  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:18.074428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.084227  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.167720  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.168760  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:18.574681  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.583878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.667539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.668689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.074506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.085322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.167619  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.168781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.574399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.584366  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.668321  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.669055  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.886419  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.074661  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.084728  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.170023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:20.170213  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.574364  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.583499  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.667708  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.668118  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.074066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.085062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.167396  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.167749  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.573957  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.583844  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.675451  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.675661  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.073998  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.083732  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.169529  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.170522  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.386803  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.573870  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.584705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.667943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.074421  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.175976  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.176483  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.176697  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.575070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.584072  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.667372  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.668676  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.074257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.083644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.168815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.574187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.583351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.668620  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.886478  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.073866  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.084524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.168018  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.168513  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:25.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.584539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.668455  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.074005  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.084351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.167815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.168130  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.573373  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.583700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.667912  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.668223  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.075963  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.084215  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.167448  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.168236  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.385536  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.584026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.667459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.667865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.074427  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.168099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.168307  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.573283  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.583651  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.669993  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.670558  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.074299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.083891  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.167292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.168790  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.386904  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.574248  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.584292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.667547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.668470  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.073583  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.084840  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.168291  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.168832  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.573644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.583792  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.667979  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.668523  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.088101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.167412  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.168798  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.574592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.584104  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.676242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.676685  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.886012  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.074267  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.083949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.167984  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.168035  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:32.573758  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.584399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.667787  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.668680  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.073761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.084622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.168481  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.169015  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.574492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.584349  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.668163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.668466  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.886108  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.074298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.090815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.168607  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:34.574304  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.583500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.667921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.668346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.083544  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.168346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.574453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.583475  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.668056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.668420  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.074656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.084839  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.175775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.176052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.385161  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.573863  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.583168  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.667584  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.667932  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.074532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.084050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.167729  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.168857  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.575013  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.584903  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.667711  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.670115  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.148918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.150092  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.170322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:38.171681  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.449562  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.647846  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.650638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.671119  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.671851  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.073841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.084303  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.168201  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:39.168689  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.574832  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.584265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.668057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.668652  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.075222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.084398  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.169659  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.169875  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.573922  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.585047  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.667391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.668328  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.885859  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.074071  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.084506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.167576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.168542  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:41.574344  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.584143  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.669612  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.074595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.086313  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.167749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.168802  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.574390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.584540  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.668039  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.668168  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.074796  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.084081  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.175684  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.176316  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.387608  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.574180  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.583921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.668317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.668557  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.074438  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.083995  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.175579  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.175990  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.574794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.584211  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.668012  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.075097  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.083848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.167219  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.168396  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.574035  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.583614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.667959  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.668489  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.886260  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.074149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.084051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.168119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.168348  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:46.574489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.583340  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.667980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.668074  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.073991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.084011  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.167606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.167975  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.574409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.584322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.667960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.668234  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.887147  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.074367  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.083559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.168314  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.168688  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:48.574112  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.583378  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.668071  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.074306  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.084220  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.167938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.168189  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.574906  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.583879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.667488  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.667893  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.887236  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.073693  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.084184  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:50.573701  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.584183  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.667813  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.668089  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.074194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.083534  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.168108  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.168510  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.574767  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.584409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.667685  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.668584  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.887461  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.074272  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.084298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.167622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.168343  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:52.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.585518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.667629  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.668294  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.074044  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.085119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.167794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.167902  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.574468  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.584721  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.668152  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.668429  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.074187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.167885  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.168463  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.386319  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.574862  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.584077  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.667752  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.668059  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.074806  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.083967  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.167246  674168 kapi.go:107] duration metric: took 1m29.503507069s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:20:55.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.573690  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.584989  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.669563  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.159319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.159900  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.244905  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.449078  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:56.574644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.584810  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.668815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.151274  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.151865  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.245823  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.648547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.650051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.747751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.147934  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.148674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.170132  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.573817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.585119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.668821  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.886841  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.074016  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.083075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.169176  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:59.573960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.586741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.669373  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.074322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.084055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.168452  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.573877  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.584075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.669220  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.074453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.084094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.169161  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.386983  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.574518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.668575  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:02.074725  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.084554  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:02.169021  674168 kapi.go:107] duration metric: took 1m36.504626828s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:21:02.573607  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.584400  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.074502  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.084128  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.387306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.574624  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.583947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.074010  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.085435  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.584904  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.074160  674168 kapi.go:107] duration metric: took 1m35.003835312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:21:05.076015  674168 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-162403 cluster.
	I0920 18:21:05.077316  674168 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:21:05.078763  674168 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:21:05.085221  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.387394  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:05.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.084576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.646888  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.085163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.584837  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.887115  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.146524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:08.584317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.083918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.584467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.083578  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.386767  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.585465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.084980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.585791  674168 kapi.go:107] duration metric: took 1m45.006228088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:21:11.587570  674168 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 18:21:11.588892  674168 addons.go:510] duration metric: took 1m52.041283386s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 18:21:12.886529  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:14.886947  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.386798  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.886426  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.387024  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.886306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.886543  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.887497  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.386454  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.886042  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.886898  674168 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.886922  674168 pod_ready.go:82] duration metric: took 1m28.50643262s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.886933  674168 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891249  674168 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.891272  674168 pod_ready.go:82] duration metric: took 4.331899ms for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891290  674168 pod_ready.go:39] duration metric: took 1m30.927531806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:21:34.891322  674168 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:21:34.891383  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.891454  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.925385  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:34.925415  674168 cri.go:89] found id: ""
	I0920 18:21:34.925427  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:34.925481  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.928881  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.928961  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.961773  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:34.961796  674168 cri.go:89] found id: ""
	I0920 18:21:34.961806  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:34.961860  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.965452  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.965512  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.997902  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:34.997922  674168 cri.go:89] found id: ""
	I0920 18:21:34.997930  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:34.997971  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.001467  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:35.001538  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:35.033709  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.033737  674168 cri.go:89] found id: ""
	I0920 18:21:35.033747  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:35.033796  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.037117  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:35.037188  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:35.070146  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.070171  674168 cri.go:89] found id: ""
	I0920 18:21:35.070180  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:35.070232  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.073666  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:35.073742  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:35.106480  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.106505  674168 cri.go:89] found id: ""
	I0920 18:21:35.106515  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:35.106579  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.109930  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:35.110001  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:35.143353  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.143373  674168 cri.go:89] found id: ""
	I0920 18:21:35.143382  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:35.143450  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.147158  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:35.147183  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.186573  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:35.186608  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.219833  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:35.219859  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:35.296767  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:35.296802  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:35.374733  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:35.374783  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:35.397401  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:35.397441  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:35.439718  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:35.439747  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:35.481086  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:35.481119  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:35.515899  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:35.515944  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:35.614907  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:35.614941  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.669956  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:35.669994  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.705242  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:35.705275  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:38.247127  674168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:38.261085  674168 api_server.go:72] duration metric: took 2m18.713476022s to wait for apiserver process to appear ...
	I0920 18:21:38.261112  674168 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:21:38.261153  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:38.261198  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:38.294652  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.294675  674168 cri.go:89] found id: ""
	I0920 18:21:38.294683  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:38.294728  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.297926  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:38.298005  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:38.330857  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.330877  674168 cri.go:89] found id: ""
	I0920 18:21:38.330887  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:38.330948  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.334140  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:38.334194  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:38.367218  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.367245  674168 cri.go:89] found id: ""
	I0920 18:21:38.367252  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:38.367293  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.370531  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:38.370590  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:38.403339  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.403370  674168 cri.go:89] found id: ""
	I0920 18:21:38.403378  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:38.403433  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.406801  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:38.406872  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:38.439882  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.439903  674168 cri.go:89] found id: ""
	I0920 18:21:38.439912  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:38.439969  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.443320  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:38.443402  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:38.476678  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.476703  674168 cri.go:89] found id: ""
	I0920 18:21:38.476712  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:38.476769  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.479997  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:38.480061  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:38.515213  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:38.515238  674168 cri.go:89] found id: ""
	I0920 18:21:38.515246  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:38.515302  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.518573  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:38.518593  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.574209  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:38.574251  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:38.652350  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:38.652388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:38.674362  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:38.674398  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.718009  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:38.718043  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.759722  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:38.759754  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.796446  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:38.796475  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.840305  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:38.840344  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:38.940656  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:38.940691  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.974579  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:38.974605  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:39.009360  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:39.009388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:39.081734  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:39.081781  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:41.622849  674168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:21:41.627422  674168 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:21:41.628424  674168 api_server.go:141] control plane version: v1.31.1
	I0920 18:21:41.628450  674168 api_server.go:131] duration metric: took 3.367330033s to wait for apiserver health ...
	I0920 18:21:41.628460  674168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:21:41.628488  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:41.628545  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:41.661458  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:41.661477  674168 cri.go:89] found id: ""
	I0920 18:21:41.661485  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:41.661531  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.664866  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:41.664947  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:41.699349  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:41.699374  674168 cri.go:89] found id: ""
	I0920 18:21:41.699391  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:41.699448  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.702834  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:41.702894  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:41.736614  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.736638  674168 cri.go:89] found id: ""
	I0920 18:21:41.736648  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:41.736696  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.740481  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:41.740540  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:41.775612  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:41.775636  674168 cri.go:89] found id: ""
	I0920 18:21:41.775644  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:41.775692  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.779048  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:41.779108  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:41.811224  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.811253  674168 cri.go:89] found id: ""
	I0920 18:21:41.811261  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:41.811313  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.814683  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:41.814756  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:41.847730  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:41.847751  674168 cri.go:89] found id: ""
	I0920 18:21:41.847761  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:41.847811  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.851164  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:41.851221  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:41.885935  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:41.885956  674168 cri.go:89] found id: ""
	I0920 18:21:41.885964  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:41.886013  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.889575  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:41.889598  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.924023  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:41.924054  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.957638  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:41.957665  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:42.013803  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:42.013840  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:42.052343  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:42.052375  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:42.135981  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:42.136020  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:42.164238  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:42.164272  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:42.365506  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:42.365547  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:42.460595  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:42.460631  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:42.502829  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:42.502868  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:42.557032  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:42.557069  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:42.629398  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:42.629442  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:45.182962  674168 system_pods.go:59] 18 kube-system pods found
	I0920 18:21:45.183040  674168 system_pods.go:61] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.183051  674168 system_pods.go:61] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.183057  674168 system_pods.go:61] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.183062  674168 system_pods.go:61] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.183069  674168 system_pods.go:61] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.183078  674168 system_pods.go:61] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.183085  674168 system_pods.go:61] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.183094  674168 system_pods.go:61] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.183101  674168 system_pods.go:61] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.183110  674168 system_pods.go:61] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.183116  674168 system_pods.go:61] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.183122  674168 system_pods.go:61] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.183129  674168 system_pods.go:61] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.183137  674168 system_pods.go:61] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.183144  674168 system_pods.go:61] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.183152  674168 system_pods.go:61] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.183158  674168 system_pods.go:61] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.183165  674168 system_pods.go:61] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.183175  674168 system_pods.go:74] duration metric: took 3.554706193s to wait for pod list to return data ...
	I0920 18:21:45.183191  674168 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:21:45.185616  674168 default_sa.go:45] found service account: "default"
	I0920 18:21:45.185637  674168 default_sa.go:55] duration metric: took 2.436616ms for default service account to be created ...
	I0920 18:21:45.185645  674168 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:21:45.193659  674168 system_pods.go:86] 18 kube-system pods found
	I0920 18:21:45.193684  674168 system_pods.go:89] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.193693  674168 system_pods.go:89] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.193697  674168 system_pods.go:89] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.193700  674168 system_pods.go:89] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.193704  674168 system_pods.go:89] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.193708  674168 system_pods.go:89] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.193712  674168 system_pods.go:89] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.193715  674168 system_pods.go:89] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.193719  674168 system_pods.go:89] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.193723  674168 system_pods.go:89] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.193726  674168 system_pods.go:89] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.193730  674168 system_pods.go:89] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.193733  674168 system_pods.go:89] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.193737  674168 system_pods.go:89] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.193741  674168 system_pods.go:89] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.193744  674168 system_pods.go:89] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.193749  674168 system_pods.go:89] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.193755  674168 system_pods.go:89] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.193761  674168 system_pods.go:126] duration metric: took 8.110899ms to wait for k8s-apps to be running ...
	I0920 18:21:45.193769  674168 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:21:45.193838  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:21:45.204913  674168 system_svc.go:56] duration metric: took 11.134209ms WaitForService to wait for kubelet
	I0920 18:21:45.204952  674168 kubeadm.go:582] duration metric: took 2m25.657338244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:21:45.204980  674168 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:21:45.208110  674168 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 18:21:45.208138  674168 node_conditions.go:123] node cpu capacity is 8
	I0920 18:21:45.208151  674168 node_conditions.go:105] duration metric: took 3.164779ms to run NodePressure ...
	I0920 18:21:45.208162  674168 start.go:241] waiting for startup goroutines ...
	I0920 18:21:45.208172  674168 start.go:246] waiting for cluster config update ...
	I0920 18:21:45.208187  674168 start.go:255] writing updated cluster config ...
	I0920 18:21:45.208459  674168 ssh_runner.go:195] Run: rm -f paused
	I0920 18:21:45.256980  674168 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:21:45.259386  674168 out.go:177] * Done! kubectl is now configured to use "addons-162403" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.265826346Z" level=info msg="Stopped pod sandbox: 6d6823671fbf5b3c97f34185ba1699644ee30f5d998917b1efeaaafd6f87508c" id=e94bedda-6682-45be-ae27-af226a6d2192 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.637617225Z" level=info msg="Stopping pod sandbox: 0ba9d92a886666e277d1a77473280b858470edf50139ad59e4725b9410fde026" id=86d54b08-b23d-426d-92c0-d940d855114c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.637895739Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40 Namespace:local-path-storage ID:0ba9d92a886666e277d1a77473280b858470edf50139ad59e4725b9410fde026 UID:eb65929f-27e6-4012-a9f1-921e7fddf300 NetNS:/var/run/netns/1ec72951-c564-4216-b835-88ef618f08d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.638025166Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40 from CNI network \"kindnet\" (type=ptp)"
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.673021091Z" level=info msg="Stopped pod sandbox: 0ba9d92a886666e277d1a77473280b858470edf50139ad59e4725b9410fde026" id=86d54b08-b23d-426d-92c0-d940d855114c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.844915037Z" level=info msg="Stopping container: 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c (timeout: 30s)" id=29a1b552-d7f3-4ee5-a45d-13390428b801 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.852829616Z" level=info msg="Stopping container: d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab (timeout: 30s)" id=d7a0da22-f8ca-42e1-8a68-95b1a0f240c6 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:30:58 addons-162403 conmon[3602]: conmon 9a925ce8e486b92b6147 <ninfo>: container 3614 exited with status 2
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.983168221Z" level=info msg="Stopped container 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c: kube-system/registry-66c9cd494c-b4j85/registry" id=29a1b552-d7f3-4ee5-a45d-13390428b801 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.983741443Z" level=info msg="Stopping pod sandbox: 85a92a0cdec73c0a27917fe6236b4fabbf97c0ff38c1a1faae33280858276530" id=d445248a-e213-4803-9a1c-228ca13699b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.983990476Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-b4j85 Namespace:kube-system ID:85a92a0cdec73c0a27917fe6236b4fabbf97c0ff38c1a1faae33280858276530 UID:88d02c55-38b5-4e2b-9986-5f7887226e63 NetNS:/var/run/netns/0fe4863b-d363-458f-b9a7-f27f7a5401e2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.984119591Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-b4j85 from CNI network \"kindnet\" (type=ptp)"
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.995695871Z" level=info msg="Stopped container d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab: kube-system/registry-proxy-x8xl5/registry-proxy" id=d7a0da22-f8ca-42e1-8a68-95b1a0f240c6 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.996210806Z" level=info msg="Stopping pod sandbox: 84e1a75570af9fa78d3504540cca639b8463a0b59560146e3f5128be842bacef" id=fb0f533f-97d4-4917-8714-a7739840fff4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:58 addons-162403 crio[1027]: time="2024-09-20 18:30:58.999501071Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-4SI7UYJFV3VI6VVJ - [0:0]\n:KUBE-HP-CES6DKWF2DTVLYWJ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-BGP27PFMIVWJTECO - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-BGP27PFMIVWJTECO\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-4SI7UYJFV3VI6VVJ\n-A KUBE-HP-4SI7UYJFV3VI6VVJ -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-4SI7UYJFV3VI6VVJ -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-A KUBE-HP-BGP27PFMIVWJTECO -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-BGP27PFMIVWJTECO -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-wxtns_ingress-nginx_682363dd-9574-4aa6-b0df-2d77ce4696a9_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-X KUBE-HP-CES6DKWF2DTVLYWJ\nCOMMIT\n"
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.003099511Z" level=info msg="Closing host port tcp:5000"
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.004878643Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.005033946Z" level=info msg="Got pod network &{Name:registry-proxy-x8xl5 Namespace:kube-system ID:84e1a75570af9fa78d3504540cca639b8463a0b59560146e3f5128be842bacef UID:22fc174a-6a59-45df-b8e0-fd97f697901c NetNS:/var/run/netns/d37aa70d-347e-4c0c-8475-2791b22672d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.005164366Z" level=info msg="Deleting pod kube-system_registry-proxy-x8xl5 from CNI network \"kindnet\" (type=ptp)"
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.020158499Z" level=info msg="Stopped pod sandbox: 85a92a0cdec73c0a27917fe6236b4fabbf97c0ff38c1a1faae33280858276530" id=d445248a-e213-4803-9a1c-228ca13699b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.052498702Z" level=info msg="Stopped pod sandbox: 84e1a75570af9fa78d3504540cca639b8463a0b59560146e3f5128be842bacef" id=fb0f533f-97d4-4917-8714-a7739840fff4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:30:59 addons-162403 crio[1027]: time="2024-09-20 18:30:59.651413131Z" level=info msg="Removing container: 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c" id=402e4e64-935f-41ed-a7ef-028b85e0b175 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 18:31:00 addons-162403 crio[1027]: time="2024-09-20 18:31:00.156414820Z" level=info msg="Removed container 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c: kube-system/registry-66c9cd494c-b4j85/registry" id=402e4e64-935f-41ed-a7ef-028b85e0b175 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 18:31:00 addons-162403 crio[1027]: time="2024-09-20 18:31:00.167188428Z" level=info msg="Removing container: d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab" id=a1ed3052-82eb-4ee1-bd52-60111fdb2ffd name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 18:31:00 addons-162403 crio[1027]: time="2024-09-20 18:31:00.184163663Z" level=info msg="Removed container d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab: kube-system/registry-proxy-x8xl5/registry-proxy" id=a1ed3052-82eb-4ee1-bd52-60111fdb2ffd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f0ceefe2e514       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             4 seconds ago       Exited              helper-pod                0                   0ba9d92a88666       helper-pod-delete-pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40
	c25d2267ccfdd       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              44 seconds ago      Running             nginx                     0                   6293a840e0f65       nginx
	167a7699d2ad7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   cff5d95699f6e       gcp-auth-89d5ffd79-742xn
	4e295275fa2d8       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   09e264dfd2260       ingress-nginx-controller-bc57996ff-wxtns
	5e23c290c5292       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              patch                     0                   13a96ce846052       ingress-nginx-admission-patch-8jqwt
	5d495cc4d007d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner    0                   2a8ec889be7b5       local-path-provisioner-86d989889c-v5k84
	acca616b5cd64       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server            0                   6a8890c1b1e3b       metrics-server-84c5f94fbc-gr2ct
	2cb6bb0b06bb3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                    0                   fe30db5645655       ingress-nginx-admission-create-ct9rs
	bffafc70e2bfb       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   677f6adbe7c56       kube-ingress-dns-minikube
	cdb59912f2e14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   10529a41c309c       coredns-7c65d6cfc9-24mgs
	525f045aa748e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   612ce81908c78       storage-provisioner
	0a3bc23a91121       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             11 minutes ago      Running             kindnet-cni               0                   7f6e1d53fda98       kindnet-j7fr4
	52c52923ef8ea       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             11 minutes ago      Running             kube-proxy                0                   ae303bad1ebff       kube-proxy-dd8cb
	4b71192f65f2d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager   0                   2a41178034cbc       kube-controller-manager-addons-162403
	249ac20417667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler            0                   e48f7866753bd       kube-scheduler-addons-162403
	c4ad43014a83b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                      0                   bdef69edf9acd       etcd-addons-162403
	f38c04f167d00       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver            0                   c3f039afa24e9       kube-apiserver-addons-162403
	
	
	==> coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] <==
	[INFO] 10.244.0.18:52396 - 19852 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128047s
	[INFO] 10.244.0.18:44347 - 60145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068286s
	[INFO] 10.244.0.18:44347 - 17143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094705s
	[INFO] 10.244.0.18:46410 - 18873 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005052499s
	[INFO] 10.244.0.18:46410 - 26037 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.007222025s
	[INFO] 10.244.0.18:34432 - 34096 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00440361s
	[INFO] 10.244.0.18:34432 - 33069 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006320363s
	[INFO] 10.244.0.18:48014 - 36175 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004973376s
	[INFO] 10.244.0.18:48014 - 51266 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006232888s
	[INFO] 10.244.0.18:55384 - 9190 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082322s
	[INFO] 10.244.0.18:55384 - 6628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129501s
	[INFO] 10.244.0.20:48448 - 47225 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223503s
	[INFO] 10.244.0.20:55693 - 31699 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271037s
	[INFO] 10.244.0.20:57762 - 4868 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147825s
	[INFO] 10.244.0.20:41977 - 42962 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138482s
	[INFO] 10.244.0.20:35780 - 25623 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090618s
	[INFO] 10.244.0.20:35231 - 28557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160324s
	[INFO] 10.244.0.20:37823 - 1338 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005223073s
	[INFO] 10.244.0.20:35707 - 7420 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00534569s
	[INFO] 10.244.0.20:59126 - 24034 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005715821s
	[INFO] 10.244.0.20:41947 - 25595 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006074152s
	[INFO] 10.244.0.20:60551 - 48110 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004720674s
	[INFO] 10.244.0.20:47355 - 8992 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005126856s
	[INFO] 10.244.0.20:41941 - 3315 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002301451s
	[INFO] 10.244.0.20:35273 - 35195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002359301s
	
	
	==> describe nodes <==
	Name:               addons-162403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-162403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-162403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-162403
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:19:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-162403
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:30:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:30:48 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:30:48 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:30:48 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:30:48 +0000   Fri, 20 Sep 2024 18:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-162403
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 84fc0251f2cc47d9b8eafd449e71e23a
	  System UUID:                a1b78626-3ab2-4437-8dfa-b9488af04241
	  Boot ID:                    1090cbe7-7e52-40cc-b00d-227cb699fd1e
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  gcp-auth                    gcp-auth-89d5ffd79-742xn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-7b5c95b59d-jz68p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-wxtns    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-24mgs                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-addons-162403                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-j7fr4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-162403                250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-162403       200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dd8cb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-162403                100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-gr2ct             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-v5k84     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-162403 event: Registered Node addons-162403 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-162403 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 27 ff 4b df 20 08 06
	[  +0.082755] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 16 6d e1 19 46 08 06
	[  +6.907947] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 0c fb 31 c2 61 08 06
	[ +27.701132] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 51 e2 82 fa 23 08 06
	[  +0.958821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 b8 e7 f5 d7 b1 08 06
	[  +0.036400] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a e8 33 86 c0 c3 08 06
	[Sep20 18:07] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 77 f7 48 11 3e 08 06
	[Sep20 18:30] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +1.015314] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +2.011792] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +4.255527] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +8.195086] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[ +16.122214] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	
	
	==> etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] <==
	{"level":"info","ts":"2024-09-20T18:19:21.249218Z","caller":"traceutil/trace.go:171","msg":"trace[1967390529] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"203.55059ms","start":"2024-09-20T18:19:21.045640Z","end":"2024-09-20T18:19:21.249190Z","steps":["trace[1967390529] 'process raft request'  (duration: 16.993754ms)","trace[1967390529] 'compare'  (duration: 90.373951ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:21.551948Z","caller":"traceutil/trace.go:171","msg":"trace[804211122] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"107.356574ms","start":"2024-09-20T18:19:21.444570Z","end":"2024-09-20T18:19:21.551927Z","steps":["trace[804211122] 'process raft request'  (duration: 100.501186ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.552185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.503897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-20T18:19:21.552216Z","caller":"traceutil/trace.go:171","msg":"trace[1547814754] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"105.546848ms","start":"2024-09-20T18:19:21.446660Z","end":"2024-09-20T18:19:21.552207Z","steps":["trace[1547814754] 'agreement among raft nodes before linearized reading'  (duration: 105.471396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.552378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.065959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-20T18:19:21.554478Z","caller":"traceutil/trace.go:171","msg":"trace[813399487] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:394; }","duration":"110.162094ms","start":"2024-09-20T18:19:21.444302Z","end":"2024-09-20T18:19:21.554464Z","steps":["trace[813399487] 'agreement among raft nodes before linearized reading'  (duration: 108.03856ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.961265Z","caller":"traceutil/trace.go:171","msg":"trace[125269741] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:411; }","duration":"106.622713ms","start":"2024-09-20T18:19:21.854600Z","end":"2024-09-20T18:19:21.961223Z","steps":["trace[125269741] 'read index received'  (duration: 10.382865ms)","trace[125269741] 'applied index is now lower than readState.Index'  (duration: 96.239015ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:21.962257Z","caller":"traceutil/trace.go:171","msg":"trace[622144423] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"105.453568ms","start":"2024-09-20T18:19:21.856784Z","end":"2024-09-20T18:19:21.962238Z","steps":["trace[622144423] 'process raft request'  (duration: 104.328353ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.962484Z","caller":"traceutil/trace.go:171","msg":"trace[1676311620] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"107.945568ms","start":"2024-09-20T18:19:21.854521Z","end":"2024-09-20T18:19:21.962467Z","steps":["trace[1676311620] 'process raft request'  (duration: 106.387415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.476893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:21.964893Z","caller":"traceutil/trace.go:171","msg":"trace[911468214] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"110.230982ms","start":"2024-09-20T18:19:21.854646Z","end":"2024-09-20T18:19:21.964877Z","steps":["trace[911468214] 'agreement among raft nodes before linearized reading'  (duration: 108.318206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.037025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-66scz\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-09-20T18:19:21.961930Z","caller":"traceutil/trace.go:171","msg":"trace[1847307825] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"107.308099ms","start":"2024-09-20T18:19:21.854610Z","end":"2024-09-20T18:19:21.961918Z","steps":["trace[1847307825] 'process raft request'  (duration: 106.453351ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.965508Z","caller":"traceutil/trace.go:171","msg":"trace[534112644] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-66scz; range_end:; response_count:1; response_revision:409; }","duration":"110.908607ms","start":"2024-09-20T18:19:21.854588Z","end":"2024-09-20T18:19:21.965497Z","steps":["trace[534112644] 'agreement among raft nodes before linearized reading'  (duration: 109.014234ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.259610Z","caller":"traceutil/trace.go:171","msg":"trace[515300591] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"197.066865ms","start":"2024-09-20T18:19:22.062522Z","end":"2024-09-20T18:19:22.259589Z","steps":["trace[515300591] 'process raft request'  (duration: 97.886637ms)","trace[515300591] 'compare'  (duration: 98.78174ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:22.259756Z","caller":"traceutil/trace.go:171","msg":"trace[414203013] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"196.971979ms","start":"2024-09-20T18:19:22.062775Z","end":"2024-09-20T18:19:22.259747Z","steps":["trace[414203013] 'read index received'  (duration: 84.675819ms)","trace[414203013] 'applied index is now lower than readState.Index'  (duration: 112.295168ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:19:22.259853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.062034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:22.259884Z","caller":"traceutil/trace.go:171","msg":"trace[1096776429] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:413; }","duration":"197.105208ms","start":"2024-09-20T18:19:22.062771Z","end":"2024-09-20T18:19:22.259876Z","steps":["trace[1096776429] 'agreement among raft nodes before linearized reading'  (duration: 197.01208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.260069Z","caller":"traceutil/trace.go:171","msg":"trace[1236995037] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"108.527632ms","start":"2024-09-20T18:19:22.151533Z","end":"2024-09-20T18:19:22.260061Z","steps":["trace[1236995037] 'process raft request'  (duration: 107.765823ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:23.355227Z","caller":"traceutil/trace.go:171","msg":"trace[850183716] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"102.197243ms","start":"2024-09-20T18:19:23.253005Z","end":"2024-09-20T18:19:23.355202Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:19:23.355525Z","caller":"traceutil/trace.go:171","msg":"trace[673964805] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"102.739687ms","start":"2024-09-20T18:19:23.252776Z","end":"2024-09-20T18:19:23.355515Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:20:56.075302Z","caller":"traceutil/trace.go:171","msg":"trace[1129311574] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"107.719546ms","start":"2024-09-20T18:20:55.967566Z","end":"2024-09-20T18:20:56.075286Z","steps":["trace[1129311574] 'process raft request'  (duration: 107.623877ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:29:10.697461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-20T18:29:10.721647Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"23.706571ms","hash":2292866617,"current-db-size-bytes":6184960,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3268608,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T18:29:10.721703Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2292866617,"revision":1534,"compact-revision":-1}
	
	
	==> gcp-auth [167a7699d2ad79a24795ae8d77140ef7ac5625e2824cc3968217f95fcb44cb62] <==
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:21:45 Ready to marshal response ...
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:21:45 Ready to marshal response ...
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:29:56 Ready to marshal response ...
	2024/09/20 18:29:56 Ready to write response ...
	2024/09/20 18:29:58 Ready to marshal response ...
	2024/09/20 18:29:58 Ready to write response ...
	2024/09/20 18:30:12 Ready to marshal response ...
	2024/09/20 18:30:12 Ready to write response ...
	2024/09/20 18:30:22 Ready to marshal response ...
	2024/09/20 18:30:22 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:56 Ready to marshal response ...
	2024/09/20 18:30:56 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	
	
	==> kernel <==
	 18:31:01 up  2:13,  0 users,  load average: 1.50, 0.64, 1.02
	Linux addons-162403 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] <==
	I0920 18:28:53.647111       1 main.go:299] handling current node
	I0920 18:29:03.649358       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:03.649399       1 main.go:299] handling current node
	I0920 18:29:13.644535       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:13.644577       1 main.go:299] handling current node
	I0920 18:29:23.644427       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:23.644598       1 main.go:299] handling current node
	I0920 18:29:33.651807       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:33.651843       1 main.go:299] handling current node
	I0920 18:29:43.644687       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:43.644730       1 main.go:299] handling current node
	I0920 18:29:53.644997       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:29:53.645055       1 main.go:299] handling current node
	I0920 18:30:03.644816       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:03.644862       1 main.go:299] handling current node
	I0920 18:30:13.644183       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:13.644231       1 main.go:299] handling current node
	I0920 18:30:23.644523       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:23.644553       1 main.go:299] handling current node
	I0920 18:30:33.644213       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:33.644258       1 main.go:299] handling current node
	I0920 18:30:43.647104       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:43.647138       1 main.go:299] handling current node
	I0920 18:30:53.644203       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:53.644254       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] <==
	E0920 18:21:34.560201       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:21:34.561794       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.35.12:443: connect: connection refused" logger="UnhandledError"
	I0920 18:21:34.599060       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 18:30:06.424489       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:30:07.441395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:30:09.437271       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:11.895465       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:30:12.151933       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.87.74"}
	I0920 18:30:37.871661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.871723       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.887033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.887175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.892360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.892420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.898321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.898486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.949118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.949160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 18:30:38.893058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 18:30:38.950022       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 18:30:38.957732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:57.308382       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.229.203"}
	
	
	==> kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] <==
	I0920 18:30:44.532131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="5.45µs"
	W0920 18:30:45.433875       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:45.433932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:30:46.280127       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:46.280169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:30:46.844482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:46.844532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:30:48.580553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:48.580595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:30:48.651250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-162403"
	I0920 18:30:49.133857       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 18:30:49.133895       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 18:30:49.543310       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 18:30:49.543351       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 18:30:55.978056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:55.978106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:30:56.910430       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:56.910472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:30:57.349398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="27.268831ms"
	I0920 18:30:57.354514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="5.069307ms"
	I0920 18:30:57.354590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="41.808µs"
	I0920 18:30:57.359040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="115.472µs"
	W0920 18:30:57.503367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:30:57.503412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:30:58.835370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.498µs"
	
	
	==> kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] <==
	I0920 18:19:23.348890       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:19:24.053513       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:19:24.053686       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:19:24.461765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:19:24.461911       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:19:24.544987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:19:24.545696       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:19:24.545778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:19:24.547965       1 config.go:199] "Starting service config controller"
	I0920 18:19:24.549760       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:19:24.549176       1 config.go:328] "Starting node config controller"
	I0920 18:19:24.549328       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:19:24.549802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:19:24.549809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:19:24.651702       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] <==
	W0920 18:19:12.051578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0920 18:19:12.051599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:12.051631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:19:12.051630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:19:12.052678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:19:12.052843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.855801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.855855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.864661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:19:12.864714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:19:12.882432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:19:12.882477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.910952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.911024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.925403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:19:12.925449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.010499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:13.010542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.081617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.081680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.166464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.166510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 18:19:15.650022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.730745    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb65929f-27e6-4012-a9f1-921e7fddf300-script" (OuterVolumeSpecName: "script") pod "eb65929f-27e6-4012-a9f1-921e7fddf300" (UID: "eb65929f-27e6-4012-a9f1-921e7fddf300"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.732313    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb65929f-27e6-4012-a9f1-921e7fddf300-kube-api-access-htwtr" (OuterVolumeSpecName: "kube-api-access-htwtr") pod "eb65929f-27e6-4012-a9f1-921e7fddf300" (UID: "eb65929f-27e6-4012-a9f1-921e7fddf300"). InnerVolumeSpecName "kube-api-access-htwtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.831349    1624 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/eb65929f-27e6-4012-a9f1-921e7fddf300-script\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.831395    1624 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/eb65929f-27e6-4012-a9f1-921e7fddf300-gcp-creds\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.831409    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-htwtr\" (UniqueName: \"kubernetes.io/projected/eb65929f-27e6-4012-a9f1-921e7fddf300-kube-api-access-htwtr\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:58 addons-162403 kubelet[1624]: I0920 18:30:58.831422    1624 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/eb65929f-27e6-4012-a9f1-921e7fddf300-data\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.133439    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9wsg\" (UniqueName: \"kubernetes.io/projected/88d02c55-38b5-4e2b-9986-5f7887226e63-kube-api-access-h9wsg\") pod \"88d02c55-38b5-4e2b-9986-5f7887226e63\" (UID: \"88d02c55-38b5-4e2b-9986-5f7887226e63\") "
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.133490    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcfx\" (UniqueName: \"kubernetes.io/projected/22fc174a-6a59-45df-b8e0-fd97f697901c-kube-api-access-ldcfx\") pod \"22fc174a-6a59-45df-b8e0-fd97f697901c\" (UID: \"22fc174a-6a59-45df-b8e0-fd97f697901c\") "
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.135857    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fc174a-6a59-45df-b8e0-fd97f697901c-kube-api-access-ldcfx" (OuterVolumeSpecName: "kube-api-access-ldcfx") pod "22fc174a-6a59-45df-b8e0-fd97f697901c" (UID: "22fc174a-6a59-45df-b8e0-fd97f697901c"). InnerVolumeSpecName "kube-api-access-ldcfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.135896    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d02c55-38b5-4e2b-9986-5f7887226e63-kube-api-access-h9wsg" (OuterVolumeSpecName: "kube-api-access-h9wsg") pod "88d02c55-38b5-4e2b-9986-5f7887226e63" (UID: "88d02c55-38b5-4e2b-9986-5f7887226e63"). InnerVolumeSpecName "kube-api-access-h9wsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.234367    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h9wsg\" (UniqueName: \"kubernetes.io/projected/88d02c55-38b5-4e2b-9986-5f7887226e63-kube-api-access-h9wsg\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.234414    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ldcfx\" (UniqueName: \"kubernetes.io/projected/22fc174a-6a59-45df-b8e0-fd97f697901c-kube-api-access-ldcfx\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.648351    1624 scope.go:117] "RemoveContainer" containerID="9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c"
	Sep 20 18:30:59 addons-162403 kubelet[1624]: I0920 18:30:59.654466    1624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba9d92a886666e277d1a77473280b858470edf50139ad59e4725b9410fde026"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.156711    1624 scope.go:117] "RemoveContainer" containerID="9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: E0920 18:31:00.159450    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c\": container with ID starting with 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c not found: ID does not exist" containerID="9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.159513    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c"} err="failed to get container status \"9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c\": rpc error: code = NotFound desc = could not find container \"9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c\": container with ID starting with 9a925ce8e486b92b61470f49566a300f40c75d174636e393e75f8ce8457c995c not found: ID does not exist"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.159551    1624 scope.go:117] "RemoveContainer" containerID="d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.243368    1624 scope.go:117] "RemoveContainer" containerID="d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: E0920 18:31:00.243914    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab\": container with ID starting with d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab not found: ID does not exist" containerID="d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.243964    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab"} err="failed to get container status \"d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab\": rpc error: code = NotFound desc = could not find container \"d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab\": container with ID starting with d21fcca2d833bfd9ad19b75aa19b9517b2e11e7fcb5f84ad75e68fe174a3c9ab not found: ID does not exist"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.447209    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2a058a-de1d-4ca8-9b9b-ec1797eb38fd" path="/var/lib/kubelet/pods/0a2a058a-de1d-4ca8-9b9b-ec1797eb38fd/volumes"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.447689    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fc174a-6a59-45df-b8e0-fd97f697901c" path="/var/lib/kubelet/pods/22fc174a-6a59-45df-b8e0-fd97f697901c/volumes"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.448162    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d02c55-38b5-4e2b-9986-5f7887226e63" path="/var/lib/kubelet/pods/88d02c55-38b5-4e2b-9986-5f7887226e63/volumes"
	Sep 20 18:31:00 addons-162403 kubelet[1624]: I0920 18:31:00.448623    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb65929f-27e6-4012-a9f1-921e7fddf300" path="/var/lib/kubelet/pods/eb65929f-27e6-4012-a9f1-921e7fddf300/volumes"
	
	
	==> storage-provisioner [525f045aa748e6ea6058a19f28604c5472b307505ab4e997fc5024dd5e9d9ef2] <==
	I0920 18:20:05.073668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:20:05.085517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:20:05.085586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:20:05.094317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:20:05.094479       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	I0920 18:20:05.094902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6c3edb-f643-4302-b044-b3279df05602", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06 became leader
	I0920 18:20:05.195504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-162403 -n addons-162403
helpers_test.go:261: (dbg) Run:  kubectl --context addons-162403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox headlamp-7b5c95b59d-jz68p ingress-nginx-admission-create-ct9rs ingress-nginx-admission-patch-8jqwt
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-162403 describe pod busybox headlamp-7b5c95b59d-jz68p ingress-nginx-admission-create-ct9rs ingress-nginx-admission-patch-8jqwt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-162403 describe pod busybox headlamp-7b5c95b59d-jz68p ingress-nginx-admission-create-ct9rs ingress-nginx-admission-patch-8jqwt: exit status 1 (72.600139ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-162403/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:21:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hs2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4hs2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-162403
	  Normal   Pulling    7m47s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-7b5c95b59d-jz68p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-ct9rs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8jqwt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-162403 describe pod busybox headlamp-7b5c95b59d-jz68p ingress-nginx-admission-create-ct9rs ingress-nginx-admission-patch-8jqwt: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.71s)

                                                
                                    
TestAddons/parallel/Ingress (152.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-162403 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-162403 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-162403 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [11c4070e-df4f-4de4-bcba-f67d382304db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [11c4070e-df4f-4de4-bcba-f67d382304db] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004053557s
I0920 18:30:23.164489  672823 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-162403 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.415530918s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-162403 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 addons disable ingress --alsologtostderr -v=1: (7.636559015s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-162403
helpers_test.go:235: (dbg) docker inspect addons-162403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7",
	        "Created": "2024-09-20T18:19:01.134918747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 674901,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:19:01.25004308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hosts",
	        "LogPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7-json.log",
	        "Name": "/addons-162403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-162403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-162403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3-init/diff:/var/lib/docker/overlay2/eaa029c0352c09d5301213b292ed71be17ad3c7af9b304910b3afcbb6087e2a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-162403",
	                "Source": "/var/lib/docker/volumes/addons-162403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-162403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-162403",
	                "name.minikube.sigs.k8s.io": "addons-162403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "488a22f7f2606afe4be623bfdfd275b5b8331f1b931576ea9ec822158b58c0ce",
	            "SandboxKey": "/var/run/docker/netns/488a22f7f260",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-162403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d0901782c3c8698a9caccb5c84dc1c7ad2c5eb6d0b068119a7aad73f3dbaa435",
	                    "EndpointID": "035274aa910e41c214a6f521c4fc53fb707a6152897b47a404b57c9e4e462cf6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-162403",
	                        "106a9fd3effc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
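The cluster address the test resolved against (`nslookup hello-john.test 192.168.49.2`) comes straight out of the `docker inspect` payload above, under `NetworkSettings.Networks.addons-162403.IPAddress`. A minimal sketch of pulling it out with Python's `json` module, using a trimmed fragment of the inspect output shown above (the fragment below is abridged for illustration, not the full payload):

```python
import json

# Trimmed fragment of the `docker inspect addons-162403` output above.
inspect_output = """
[
  {
    "Name": "/addons-162403",
    "NetworkSettings": {
      "Networks": {
        "addons-162403": {
          "Gateway": "192.168.49.1",
          "IPAddress": "192.168.49.2",
          "IPPrefixLen": 24
        }
      }
    }
  }
]
"""

# `docker inspect` always emits a JSON array, one object per container.
containers = json.loads(inspect_output)
networks = containers[0]["NetworkSettings"]["Networks"]
ip = networks["addons-162403"]["IPAddress"]
print(ip)  # -> 192.168.49.2, the address `minikube ip` reports in the steps above
```

The same value can be read directly with a Go template, e.g. `docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-162403`.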
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-162403 -n addons-162403
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 logs -n 25: (1.194794865s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-536443                                                                     | download-only-536443   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| delete  | -p download-only-183655                                                                     | download-only-183655   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | download-docker-729301                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-729301                                                                   | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | binary-mirror-249385                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43551                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-249385                                                                     | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| addons  | enable dashboard -p                                                                         | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-162403 --wait=true                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh curl -s                                                                   | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh cat                                                                       | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | /opt/local-path-provisioner/pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-162403 ip                                                                            | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-162403 ip                                                                            | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:18:38
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:18:38.955255  674168 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:38.955393  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955405  674168 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:38.955420  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955592  674168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:18:38.956218  674168 out.go:352] Setting JSON to false
	I0920 18:18:38.957151  674168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7263,"bootTime":1726849056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:18:38.957258  674168 start.go:139] virtualization: kvm guest
	I0920 18:18:38.959268  674168 out.go:177] * [addons-162403] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:18:38.960748  674168 notify.go:220] Checking for updates...
	I0920 18:18:38.960767  674168 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:18:38.962055  674168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:18:38.963377  674168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:18:38.964538  674168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:18:38.965672  674168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:18:38.966885  674168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:18:38.968185  674168 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:18:38.989387  674168 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:18:38.989471  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.033969  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.025186058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.034102  674168 docker.go:318] overlay module found
	I0920 18:18:39.035798  674168 out.go:177] * Using the docker driver based on user configuration
	I0920 18:18:39.037025  674168 start.go:297] selected driver: docker
	I0920 18:18:39.037039  674168 start.go:901] validating driver "docker" against <nil>
	I0920 18:18:39.037051  674168 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:18:39.037947  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.085086  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.076841302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.085255  674168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:18:39.085496  674168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:18:39.087167  674168 out.go:177] * Using Docker driver with root privileges
	I0920 18:18:39.088532  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:18:39.088595  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:18:39.088606  674168 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:18:39.088665  674168 start.go:340] cluster config:
	{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:39.089923  674168 out.go:177] * Starting "addons-162403" primary control-plane node in "addons-162403" cluster
	I0920 18:18:39.091072  674168 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:18:39.092598  674168 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:18:39.094070  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:39.094104  674168 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:18:39.094121  674168 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:39.094133  674168 cache.go:56] Caching tarball of preloaded images
	I0920 18:18:39.094252  674168 preload.go:172] Found /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:18:39.094263  674168 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:18:39.094613  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:18:39.094639  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json: {Name:mka678336c738f0ad3cca0a057f366143df6dca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:39.109272  674168 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:39.109425  674168 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:18:39.109447  674168 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:18:39.109453  674168 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:18:39.109467  674168 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:18:39.109477  674168 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:18:51.189040  674168 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:18:51.189079  674168 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:18:51.189135  674168 start.go:360] acquireMachinesLock for addons-162403: {Name:mk331c03eda7bf008a5f6618682622fc66137de8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:18:51.189234  674168 start.go:364] duration metric: took 78.073µs to acquireMachinesLock for "addons-162403"
	I0920 18:18:51.189258  674168 start.go:93] Provisioning new machine with config: &{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:51.189337  674168 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:18:51.191508  674168 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:18:51.191775  674168 start.go:159] libmachine.API.Create for "addons-162403" (driver="docker")
	I0920 18:18:51.191808  674168 client.go:168] LocalClient.Create starting
	I0920 18:18:51.191901  674168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem
	I0920 18:18:51.507907  674168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem
	I0920 18:18:51.677159  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:18:51.691915  674168 cli_runner.go:211] docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:18:51.692010  674168 network_create.go:284] running [docker network inspect addons-162403] to gather additional debugging logs...
	I0920 18:18:51.692035  674168 cli_runner.go:164] Run: docker network inspect addons-162403
	W0920 18:18:51.707711  674168 cli_runner.go:211] docker network inspect addons-162403 returned with exit code 1
	I0920 18:18:51.707746  674168 network_create.go:287] error running [docker network inspect addons-162403]: docker network inspect addons-162403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-162403 not found
	I0920 18:18:51.707769  674168 network_create.go:289] output of [docker network inspect addons-162403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-162403 not found
	
	** /stderr **
	I0920 18:18:51.707870  674168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:18:51.723682  674168 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a5f410}
	I0920 18:18:51.723727  674168 network_create.go:124] attempt to create docker network addons-162403 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:18:51.723786  674168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-162403 addons-162403
	I0920 18:18:51.787135  674168 network_create.go:108] docker network addons-162403 192.168.49.0/24 created
	I0920 18:18:51.787171  674168 kic.go:121] calculated static IP "192.168.49.2" for the "addons-162403" container
	I0920 18:18:51.787234  674168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:18:51.802456  674168 cli_runner.go:164] Run: docker volume create addons-162403 --label name.minikube.sigs.k8s.io=addons-162403 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:18:51.819456  674168 oci.go:103] Successfully created a docker volume addons-162403
	I0920 18:18:51.819546  674168 cli_runner.go:164] Run: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:18:56.747820  674168 cli_runner.go:217] Completed: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.92822817s)
	I0920 18:18:56.747853  674168 oci.go:107] Successfully prepared a docker volume addons-162403
	I0920 18:18:56.747870  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:56.747891  674168 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:18:56.747948  674168 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:19:01.072064  674168 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324069588s)
	I0920 18:19:01.072104  674168 kic.go:203] duration metric: took 4.324208181s to extract preloaded images to volume ...
	W0920 18:19:01.072245  674168 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:19:01.072342  674168 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:19:01.120121  674168 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-162403 --name addons-162403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-162403 --network addons-162403 --ip 192.168.49.2 --volume addons-162403:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:19:01.433919  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Running}}
	I0920 18:19:01.451773  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.468968  674168 cli_runner.go:164] Run: docker exec addons-162403 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:19:01.510599  674168 oci.go:144] the created container "addons-162403" has a running status.
	I0920 18:19:01.510643  674168 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa...
	I0920 18:19:01.839171  674168 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:19:01.868842  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.888555  674168 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:19:01.888581  674168 kic_runner.go:114] Args: [docker exec --privileged addons-162403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:19:01.951628  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.969485  674168 machine.go:93] provisionDockerMachine start ...
	I0920 18:19:01.969572  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:01.988650  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:01.988870  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:01.988884  674168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:19:02.122640  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.122671  674168 ubuntu.go:169] provisioning hostname "addons-162403"
	I0920 18:19:02.122731  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.140337  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.140537  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.140557  674168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-162403 && echo "addons-162403" | sudo tee /etc/hostname
	I0920 18:19:02.286561  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.286650  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.304306  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.304516  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.304533  674168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-162403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-162403/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-162403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:19:02.439353  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:19:02.439404  674168 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-664237/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-664237/.minikube}
	I0920 18:19:02.439441  674168 ubuntu.go:177] setting up certificates
	I0920 18:19:02.439455  674168 provision.go:84] configureAuth start
	I0920 18:19:02.439504  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:02.456858  674168 provision.go:143] copyHostCerts
	I0920 18:19:02.456941  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/ca.pem (1078 bytes)
	I0920 18:19:02.457067  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/cert.pem (1123 bytes)
	I0920 18:19:02.457128  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/key.pem (1679 bytes)
	I0920 18:19:02.457180  674168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem org=jenkins.addons-162403 san=[127.0.0.1 192.168.49.2 addons-162403 localhost minikube]
	I0920 18:19:02.568617  674168 provision.go:177] copyRemoteCerts
	I0920 18:19:02.568695  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:19:02.568736  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.586920  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:02.684045  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:19:02.707472  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:19:02.731956  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:19:02.755601  674168 provision.go:87] duration metric: took 316.131194ms to configureAuth
	I0920 18:19:02.755631  674168 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:19:02.755814  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:02.755914  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.772731  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.772918  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.772936  674168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:19:02.992259  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:19:02.992298  674168 machine.go:96] duration metric: took 1.022790809s to provisionDockerMachine
	I0920 18:19:02.992310  674168 client.go:171] duration metric: took 11.800496863s to LocalClient.Create
	I0920 18:19:02.992331  674168 start.go:167] duration metric: took 11.800557763s to libmachine.API.Create "addons-162403"
	I0920 18:19:02.992341  674168 start.go:293] postStartSetup for "addons-162403" (driver="docker")
	I0920 18:19:02.992353  674168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:19:02.992454  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:19:02.992503  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.008771  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.104327  674168 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:19:03.107709  674168 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:19:03.107745  674168 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:19:03.107753  674168 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:19:03.107760  674168 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:19:03.107771  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/addons for local assets ...
	I0920 18:19:03.107836  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/files for local assets ...
	I0920 18:19:03.107861  674168 start.go:296] duration metric: took 115.514633ms for postStartSetup
	I0920 18:19:03.108152  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.124456  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:19:03.124718  674168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:19:03.124760  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.141718  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.231925  674168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:19:03.236351  674168 start.go:128] duration metric: took 12.046994202s to createHost
	I0920 18:19:03.236388  674168 start.go:83] releasing machines lock for "addons-162403", held for 12.047138719s
	I0920 18:19:03.236447  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.252823  674168 ssh_runner.go:195] Run: cat /version.json
	I0920 18:19:03.252881  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.252896  674168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:19:03.252965  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.270590  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.270812  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.431267  674168 ssh_runner.go:195] Run: systemctl --version
	I0920 18:19:03.435427  674168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:19:03.571297  674168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:19:03.575824  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.593925  674168 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:19:03.594008  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.621210  674168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:19:03.621241  674168 start.go:495] detecting cgroup driver to use...
	I0920 18:19:03.621281  674168 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:19:03.621346  674168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:19:03.636176  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:19:03.646720  674168 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:19:03.646780  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:19:03.659269  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:19:03.672678  674168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:19:03.753551  674168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:19:03.832924  674168 docker.go:233] disabling docker service ...
	I0920 18:19:03.833033  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:19:03.850932  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:19:03.861851  674168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:19:03.936436  674168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:19:04.025605  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:19:04.037271  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:19:04.053234  674168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:19:04.053306  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.062992  674168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:19:04.063067  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.073077  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.082949  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.093166  674168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:19:04.102194  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.111782  674168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.127237  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.137185  674168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:19:04.145365  674168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:19:04.153756  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.227978  674168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:19:04.324503  674168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:19:04.324605  674168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:19:04.328475  674168 start.go:563] Will wait 60s for crictl version
	I0920 18:19:04.328524  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:19:04.331866  674168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:19:04.364842  674168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:19:04.364939  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.404023  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.442587  674168 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:19:04.444061  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:19:04.460165  674168 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:19:04.463995  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:19:04.474789  674168 kubeadm.go:883] updating cluster {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:19:04.474919  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:19:04.474992  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.537318  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.537404  674168 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:19:04.537459  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.571115  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.571143  674168 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:19:04.571153  674168 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:19:04.571259  674168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-162403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:19:04.571321  674168 ssh_runner.go:195] Run: crio config
	I0920 18:19:04.615201  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:04.615225  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:04.615237  674168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:19:04.615259  674168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-162403 NodeName:addons-162403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:19:04.615389  674168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-162403"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:19:04.615447  674168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:19:04.624504  674168 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:19:04.624568  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:19:04.633418  674168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:19:04.650496  674168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:19:04.667763  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 18:19:04.684808  674168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:19:04.688259  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:19:04.698716  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.772157  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:04.785010  674168 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403 for IP: 192.168.49.2
	I0920 18:19:04.785034  674168 certs.go:194] generating shared ca certs ...
	I0920 18:19:04.785055  674168 certs.go:226] acquiring lock for ca certs: {Name:mk4b124302946da10a6534852cdb170d2c9fff4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.785184  674168 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key
	I0920 18:19:04.975314  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt ...
	I0920 18:19:04.975345  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt: {Name:mk70db283e13139496726ffe72d8d96dde32a822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975559  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key ...
	I0920 18:19:04.975584  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key: {Name:mk35cfb4b8c77a9b5e50fcee25a6045ab52d6653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975700  674168 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key
	I0920 18:19:05.060533  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt ...
	I0920 18:19:05.060567  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt: {Name:mk71caa95e512e49d5f0bbeb9669d49d06067538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060774  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key ...
	I0920 18:19:05.060791  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key: {Name:mk48c17978eac1b6467fd589c3690dfaad357164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060889  674168 certs.go:256] generating profile certs ...
	I0920 18:19:05.060964  674168 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key
	I0920 18:19:05.060984  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt with IP's: []
	I0920 18:19:05.132709  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt ...
	I0920 18:19:05.132744  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: {Name:mk43ea5dca75753d8d8a5367831467eeceb0fdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.132939  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key ...
	I0920 18:19:05.132959  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key: {Name:mk5d83dae2938d299506d1c5f284f55c2b17c66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.133062  674168 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af
	I0920 18:19:05.133090  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:19:05.307926  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af ...
	I0920 18:19:05.307962  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af: {Name:mkae84dcee0d54761655975153f0afe30c8c5174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308152  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af ...
	I0920 18:19:05.308174  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af: {Name:mkf96ba0fb78917c3ee6f7335dc544ffcc5224ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308277  674168 certs.go:381] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt
	I0920 18:19:05.308379  674168 certs.go:385] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key
	I0920 18:19:05.308461  674168 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key
	I0920 18:19:05.308486  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt with IP's: []
	I0920 18:19:05.434100  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt ...
	I0920 18:19:05.434142  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt: {Name:mk90e9baf01ada5513109eca2cf59bfe6b10cb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434322  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key ...
	I0920 18:19:05.434336  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key: {Name:mk97b476f9ae1a8b6c97412a5ae795e7d133f43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434511  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 18:19:05.434549  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:19:05.434571  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:19:05.434592  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem (1679 bytes)
	I0920 18:19:05.435207  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:19:05.458404  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:19:05.481726  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:19:05.504545  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:19:05.526862  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:19:05.548944  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:19:05.571483  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:19:05.593408  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:19:05.615754  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:19:05.638295  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:19:05.654802  674168 ssh_runner.go:195] Run: openssl version
	I0920 18:19:05.660087  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:19:05.669718  674168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673149  674168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673209  674168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.679642  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:19:05.689469  674168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:19:05.692656  674168 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:19:05.692709  674168 kubeadm.go:392] StartCluster: {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:19:05.692807  674168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:19:05.692848  674168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:19:05.726380  674168 cri.go:89] found id: ""
	I0920 18:19:05.726441  674168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:19:05.734945  674168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:19:05.743371  674168 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:19:05.743434  674168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:19:05.751458  674168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:19:05.751486  674168 kubeadm.go:157] found existing configuration files:
	
	I0920 18:19:05.751533  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:19:05.759587  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:19:05.759665  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:19:05.767587  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:19:05.775580  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:19:05.775632  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:19:05.783550  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.791364  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:19:05.791431  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.799115  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:19:05.806872  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:19:05.806937  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:19:05.814767  674168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:19:05.849981  674168 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:19:05.850038  674168 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:19:05.866359  674168 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:19:05.866451  674168 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 18:19:05.866546  674168 kubeadm.go:310] OS: Linux
	I0920 18:19:05.866606  674168 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:19:05.866650  674168 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:19:05.866698  674168 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:19:05.866761  674168 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:19:05.866832  674168 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:19:05.866901  674168 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:19:05.866960  674168 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:19:05.867073  674168 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:19:05.867141  674168 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:19:05.916092  674168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:19:05.916231  674168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:19:05.916371  674168 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:19:05.923502  674168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:19:05.926743  674168 out.go:235]   - Generating certificates and keys ...
	I0920 18:19:05.926857  674168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:19:05.926930  674168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:19:06.037108  674168 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:19:06.230359  674168 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:19:06.324616  674168 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:19:06.546085  674168 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:19:06.884456  674168 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:19:06.884577  674168 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.307543  674168 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:19:07.307735  674168 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.569020  674168 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:19:07.702458  674168 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:19:07.850614  674168 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:19:07.850743  674168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:19:07.903971  674168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:19:08.053888  674168 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:19:08.422419  674168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:19:08.545791  674168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:19:08.627541  674168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:19:08.627956  674168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:19:08.631231  674168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:19:08.633449  674168 out.go:235]   - Booting up control plane ...
	I0920 18:19:08.633578  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:19:08.633681  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:19:08.633775  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:19:08.645378  674168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:19:08.650587  674168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:19:08.650659  674168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:19:08.727967  674168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:19:08.728106  674168 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:19:09.229492  674168 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.337636ms
	I0920 18:19:09.229658  674168 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:19:13.730791  674168 kubeadm.go:310] [api-check] The API server is healthy after 4.501479968s
	I0920 18:19:13.742809  674168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:19:13.755431  674168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:19:13.774442  674168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:19:13.774707  674168 kubeadm.go:310] [mark-control-plane] Marking the node addons-162403 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:19:13.782319  674168 kubeadm.go:310] [bootstrap-token] Using token: dfp0rr.g8klnxfszt90e7ou
	I0920 18:19:13.783826  674168 out.go:235]   - Configuring RBAC rules ...
	I0920 18:19:13.783941  674168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:19:13.787166  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:19:13.793657  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:19:13.797189  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:19:13.799957  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:19:13.802629  674168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:19:14.139197  674168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:19:14.568490  674168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:19:15.136897  674168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:19:15.137714  674168 kubeadm.go:310] 
	I0920 18:19:15.137780  674168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:19:15.137788  674168 kubeadm.go:310] 
	I0920 18:19:15.137863  674168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:19:15.137873  674168 kubeadm.go:310] 
	I0920 18:19:15.137906  674168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:19:15.138010  674168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:19:15.138117  674168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:19:15.138134  674168 kubeadm.go:310] 
	I0920 18:19:15.138208  674168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:19:15.138217  674168 kubeadm.go:310] 
	I0920 18:19:15.138283  674168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:19:15.138292  674168 kubeadm.go:310] 
	I0920 18:19:15.138391  674168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:19:15.138525  674168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:19:15.138624  674168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:19:15.138640  674168 kubeadm.go:310] 
	I0920 18:19:15.138736  674168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:19:15.138857  674168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:19:15.138879  674168 kubeadm.go:310] 
	I0920 18:19:15.139024  674168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139190  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae \
	I0920 18:19:15.139223  674168 kubeadm.go:310] 	--control-plane 
	I0920 18:19:15.139231  674168 kubeadm.go:310] 
	I0920 18:19:15.139332  674168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:19:15.139342  674168 kubeadm.go:310] 
	I0920 18:19:15.139453  674168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139569  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae 
	I0920 18:19:15.141419  674168 kubeadm.go:310] W0920 18:19:05.847423    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141788  674168 kubeadm.go:310] W0920 18:19:05.848046    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141998  674168 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 18:19:15.142142  674168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:19:15.142176  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:15.142184  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:15.144217  674168 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:19:15.145705  674168 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:19:15.149559  674168 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:19:15.149575  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:19:15.167148  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:19:15.359568  674168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:19:15.359642  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.359669  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-162403 minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-162403 minikube.k8s.io/primary=true
	I0920 18:19:15.367240  674168 ops.go:34] apiserver oom_adj: -16
	I0920 18:19:15.462349  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.963384  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.462821  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.962540  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.463154  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.962489  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.463105  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.962640  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.463445  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.546496  674168 kubeadm.go:1113] duration metric: took 4.186919442s to wait for elevateKubeSystemPrivileges
	I0920 18:19:19.546589  674168 kubeadm.go:394] duration metric: took 13.853885644s to StartCluster
	I0920 18:19:19.546618  674168 settings.go:142] acquiring lock: {Name:mk3858ba4d2318954bc9bdba2ebdd7d07c1af964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.546761  674168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:19:19.547278  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/kubeconfig: {Name:mk211a7242c57e0384e62621e3b0b410c7b81ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.547568  674168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:19:19.547588  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:19:19.547603  674168 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:19:19.547727  674168 addons.go:69] Setting cloud-spanner=true in profile "addons-162403"
	I0920 18:19:19.547739  674168 addons.go:69] Setting yakd=true in profile "addons-162403"
	I0920 18:19:19.547755  674168 addons.go:234] Setting addon cloud-spanner=true in "addons-162403"
	I0920 18:19:19.547765  674168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-162403"
	I0920 18:19:19.547780  674168 addons.go:69] Setting metrics-server=true in profile "addons-162403"
	I0920 18:19:19.547793  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547804  674168 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:19.547813  674168 addons.go:234] Setting addon metrics-server=true in "addons-162403"
	I0920 18:19:19.547819  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.547838  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547843  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547881  674168 addons.go:69] Setting storage-provisioner=true in profile "addons-162403"
	I0920 18:19:19.547898  674168 addons.go:234] Setting addon storage-provisioner=true in "addons-162403"
	I0920 18:19:19.547923  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548013  674168 addons.go:69] Setting ingress=true in profile "addons-162403"
	I0920 18:19:19.548033  674168 addons.go:234] Setting addon ingress=true in "addons-162403"
	I0920 18:19:19.548078  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548348  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548368  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548372  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548394  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548471  674168 addons.go:69] Setting default-storageclass=true in profile "addons-162403"
	I0920 18:19:19.548500  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-162403"
	I0920 18:19:19.548533  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548792  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549033  674168 addons.go:69] Setting registry=true in profile "addons-162403"
	I0920 18:19:19.549061  674168 addons.go:234] Setting addon registry=true in "addons-162403"
	I0920 18:19:19.549095  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547756  674168 addons.go:234] Setting addon yakd=true in "addons-162403"
	I0920 18:19:19.549524  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.549550  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549933  674168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-162403"
	I0920 18:19:19.549957  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-162403"
	I0920 18:19:19.550006  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550197  674168 addons.go:69] Setting ingress-dns=true in profile "addons-162403"
	I0920 18:19:19.550213  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550225  674168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-162403"
	I0920 18:19:19.550238  674168 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-162403"
	I0920 18:19:19.550263  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.551201  674168 addons.go:69] Setting gcp-auth=true in profile "addons-162403"
	I0920 18:19:19.554213  674168 addons.go:69] Setting inspektor-gadget=true in profile "addons-162403"
	I0920 18:19:19.554281  674168 addons.go:69] Setting volcano=true in profile "addons-162403"
	I0920 18:19:19.554302  674168 addons.go:234] Setting addon volcano=true in "addons-162403"
	I0920 18:19:19.551386  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550214  674168 addons.go:234] Setting addon ingress-dns=true in "addons-162403"
	I0920 18:19:19.554827  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.554304  674168 addons.go:234] Setting addon inspektor-gadget=true in "addons-162403"
	I0920 18:19:19.555122  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.555478  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.555674  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554221  674168 mustload.go:65] Loading cluster: addons-162403
	I0920 18:19:19.554183  674168 out.go:177] * Verifying Kubernetes components...
	I0920 18:19:19.556337  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.556799  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554271  674168 addons.go:69] Setting volumesnapshots=true in profile "addons-162403"
	I0920 18:19:19.557261  674168 addons.go:234] Setting addon volumesnapshots=true in "addons-162403"
	I0920 18:19:19.557308  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.559052  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:19.569182  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.588210  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588739  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588904  674168 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:19:19.588992  674168 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:19:19.590309  674168 addons.go:234] Setting addon default-storageclass=true in "addons-162403"
	I0920 18:19:19.590370  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.590786  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:19:19.590802  674168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:19:19.590864  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.590961  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.591935  674168 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:19:19.593751  674168 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:19:19.593775  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:19:19.593828  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.601351  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:19:19.601355  674168 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:19:19.601442  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.603687  674168 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:19.603717  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:19:19.603786  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.604025  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.608296  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:19:19.609371  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:19:19.610117  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:19.610142  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:19:19.610211  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.612872  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:19:19.614205  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:19:19.615649  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:19:19.616930  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:19:19.618228  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:19:19.618357  674168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:19:19.619747  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:19.619771  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:19:19.619845  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.620114  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:19:19.624754  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:19:19.624710  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:19:19.624879  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:19:19.624952  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.628419  674168 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:19:19.628839  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:19:19.628880  674168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:19:19.628974  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.629898  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:19:19.629920  674168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:19:19.629986  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.635925  674168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:19.635951  674168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:19:19.636128  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.638673  674168 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:19:19.638818  674168 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:19:19.641476  674168 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:19:19.641507  674168 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:19:19.641586  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.641902  674168 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:19.641918  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:19:19.641968  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.644063  674168 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:19:19.647042  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:19.647066  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:19:19.647131  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.651090  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.672918  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.673246  674168 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-162403"
	I0920 18:19:19.673285  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.673746  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	W0920 18:19:19.674079  674168 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:19:19.680928  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.692356  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.699068  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.703084  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.708959  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.709724  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710034  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710800  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.716097  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.718252  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.725687  674168 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:19:19.727095  674168 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:19:19.728444  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:19.728469  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:19:19.728535  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.728936  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.756378  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.851667  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:19.851869  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:19:19.958165  674168 node_ready.go:35] waiting up to 6m0s for node "addons-162403" to be "Ready" ...
	I0920 18:19:20.049122  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:20.059225  674168 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:19:20.059328  674168 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:19:20.143656  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:19:20.143697  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:19:20.162533  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:20.248915  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:20.252979  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:19:20.253073  674168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:19:20.253373  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:20.255477  674168 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:19:20.255545  674168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:19:20.344657  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:19:20.344752  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:19:20.344997  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:20.347913  674168 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:19:20.347984  674168 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:19:20.361494  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:19:20.361598  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:19:20.443778  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:20.460111  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:19:20.460213  674168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:19:20.466113  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:20.556027  674168 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.556125  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:19:20.562330  674168 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:19:20.562372  674168 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:19:20.644614  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:19:20.644712  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:19:20.645083  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:19:20.645155  674168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:19:20.743572  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:19:20.743665  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:19:20.843761  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.863489  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:19:20.863586  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:19:20.866991  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:19:20.867029  674168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:19:20.957725  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:19:20.957824  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:19:21.051014  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:19:21.051107  674168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:19:21.146711  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.146794  674168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:19:21.244660  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:19:21.244769  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:19:21.345912  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:19:21.345949  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:19:21.353497  674168 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.353530  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:19:21.443980  674168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.592066127s)
	I0920 18:19:21.444142  674168 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:19:21.446954  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.447049  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:19:21.451328  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.556343  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.567862  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.643571  674168 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:19:21.643834  674168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:19:21.857128  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:19:21.857204  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:19:21.970271  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:22.055373  674168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-162403" context rescaled to 1 replicas
	I0920 18:19:22.254875  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:19:22.255007  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:19:22.351603  674168 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.351644  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:19:22.745266  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:19:22.745357  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:19:22.950177  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:19:22.950262  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:19:22.950772  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.958386  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.909152791s)
	I0920 18:19:23.143977  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:19:23.144014  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:19:23.344840  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:19:23.344947  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:19:23.463128  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.463229  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:19:23.654193  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.862111  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.699531854s)
	I0920 18:19:24.153748  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:25.659918  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.410895641s)
	I0920 18:19:25.659961  674168 addons.go:475] Verifying addon ingress=true in "addons-162403"
	I0920 18:19:25.659999  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.406542284s)
	I0920 18:19:25.660093  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315030192s)
	I0920 18:19:25.660129  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216231279s)
	I0920 18:19:25.660205  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.193997113s)
	I0920 18:19:25.660276  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.816413561s)
	I0920 18:19:25.660308  674168 addons.go:475] Verifying addon registry=true in "addons-162403"
	I0920 18:19:25.660382  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.208965825s)
	I0920 18:19:25.660442  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.104058168s)
	I0920 18:19:25.660445  674168 addons.go:475] Verifying addon metrics-server=true in "addons-162403"
	I0920 18:19:25.661699  674168 out.go:177] * Verifying registry addon...
	I0920 18:19:25.661755  674168 out.go:177] * Verifying ingress addon...
	I0920 18:19:25.661868  674168 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-162403 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:19:25.663738  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:19:25.664391  674168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0920 18:19:25.668639  674168 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:19:25.668854  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:19:25.668871  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:25.768664  674168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:19:25.768694  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.168189  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.168647  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.244777  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.676860398s)
	W0920 18:19:26.244890  674168 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244939  674168 retry.go:31] will retry after 349.249211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244988  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.294091803s)
	I0920 18:19:26.461459  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:26.574707  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.92045562s)
	I0920 18:19:26.574757  674168 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:26.577367  674168 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:19:26.579563  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:19:26.582943  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:19:26.582960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:26.594681  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:26.683334  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.683674  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.858359  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:19:26.858435  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:26.875902  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:26.984458  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:19:27.001107  674168 addons.go:234] Setting addon gcp-auth=true in "addons-162403"
	I0920 18:19:27.001163  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:27.001520  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:27.018107  674168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:19:27.018153  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:27.035342  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:27.083631  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.166744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:27.168128  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.646290  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.669072  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.669418  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.084361  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.166640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.168138  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:28.462238  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:28.583099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.667640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.667978  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.084266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.167817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.168604  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.271367  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.676631111s)
	I0920 18:19:29.271432  674168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.253291372s)
	I0920 18:19:29.273273  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:29.274673  674168 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:19:29.276361  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:19:29.276382  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:19:29.294783  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:19:29.294816  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:19:29.345482  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.345506  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:19:29.363625  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.583445  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.667504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.668067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.065330  674168 addons.go:475] Verifying addon gcp-auth=true in "addons-162403"
	I0920 18:19:30.067623  674168 out.go:177] * Verifying gcp-auth addon...
	I0920 18:19:30.070321  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:19:30.073240  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:19:30.073265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.083449  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.167256  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.168040  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.574216  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.583194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.667733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.668045  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.961659  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:31.073149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.082855  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.168666  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:31.573991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.582620  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.667824  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.668352  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.073266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.082897  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.167779  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.168380  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.574170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.582879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.667250  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.667809  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.074390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.083130  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.168329  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.461572  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:33.574511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.582999  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.667656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.668054  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.073228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.082952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.168374  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.169326  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.573898  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.583235  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.666598  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.667851  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.074529  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.083233  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.166658  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.167884  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.573980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.582504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.667399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.667855  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.960967  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:36.073874  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.083242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.166883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.168404  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:36.574240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.582733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.667467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.667953  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.073902  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.082616  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.167641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.167921  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.573766  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.583480  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.666947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.667458  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.961890  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:38.073945  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.082640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.167284  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.167840  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:38.574639  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.583506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.667337  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.667789  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.073649  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.084058  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.167781  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.168107  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.574163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.583050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.666763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.668155  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.073200  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.082825  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.167592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.168195  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.461680  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:40.573622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.583124  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.666705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.667590  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.083878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.167259  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.167696  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.573769  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.583407  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.667187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.667621  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.073956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.082469  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.167268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.167773  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.573883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.582802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.667181  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.667648  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.960976  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:43.073526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.083195  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.168076  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:43.574500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.667526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.667955  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.073938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.082232  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.167119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.573757  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.583299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.666525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.668092  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.961566  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:45.074296  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.083265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.166731  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:45.573582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.583070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.666718  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.667763  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.074393  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.083026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.167896  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.168469  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.582611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.667417  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.667835  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.074391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.083342  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.167582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.168016  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.461559  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:47.573674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.583550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.667101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.668093  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.074385  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.083357  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.166820  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.574056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.583138  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.667700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.668170  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.073954  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.082550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.167253  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.167689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.573924  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.582493  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.667268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.667713  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.961127  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:50.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.082751  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.167446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.167837  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:50.573975  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.582446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.667144  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.667725  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.073776  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.083555  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.167603  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.168082  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.573207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.582872  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.667933  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.668639  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.961792  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:52.073650  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.083774  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.167240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:52.574175  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.583088  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.667593  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.668073  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.074115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.082843  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.167552  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.168250  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.574203  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.583096  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.666775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.668043  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.073577  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.083165  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.166822  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.168120  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.461639  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:54.573485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.667881  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.668272  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.074459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.083676  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.168063  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.574347  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.583185  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.666614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.668023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.074436  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.083017  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.167739  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.168067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.574141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.582595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.667193  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.667702  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.961306  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:57.073951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.082426  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.167619  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:57.574066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.582553  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.667363  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.667862  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.074286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.083053  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.168080  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.168562  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.574033  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.582834  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.667744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.667977  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.074041  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.084503  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.167532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.167866  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.461351  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:59.574055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.582662  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.667606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.668345  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.074001  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.082537  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.167389  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.167781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.573646  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.583513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.667237  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.667751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.074614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.083606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.167425  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.167849  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.574159  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.582763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.667525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.667967  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.961782  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:20:02.073687  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.083273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.167793  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.168126  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:02.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.582489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.667286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.667673  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.074061  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.083043  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.167741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.168186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.574298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.583319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.667171  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.667926  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.963598  674168 node_ready.go:49] node "addons-162403" has status "Ready":"True"
	I0920 18:20:03.963697  674168 node_ready.go:38] duration metric: took 44.005491387s for node "addons-162403" to be "Ready" ...
	I0920 18:20:03.963739  674168 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:20:03.975991  674168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:04.073640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.083934  674168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:20:04.083964  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.166878  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:20:04.166911  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:04.574414  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.584293  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.668383  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.668692  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.077146  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.176605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.176677  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.176971  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.574207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.583569  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.668257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.668609  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.982730  674168 pod_ready.go:93] pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.982753  674168 pod_ready.go:82] duration metric: took 2.006720801s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.982772  674168 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987525  674168 pod_ready.go:93] pod "etcd-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.987550  674168 pod_ready.go:82] duration metric: took 4.771792ms for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987564  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992095  674168 pod_ready.go:93] pod "kube-apiserver-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.992119  674168 pod_ready.go:82] duration metric: took 4.547516ms for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992133  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996705  674168 pod_ready.go:93] pod "kube-controller-manager-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.996728  674168 pod_ready.go:82] duration metric: took 4.58678ms for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996742  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001096  674168 pod_ready.go:93] pod "kube-proxy-dd8cb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.001119  674168 pod_ready.go:82] duration metric: took 4.367688ms for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001128  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.074611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.084485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.167894  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.168247  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:06.380446  674168 pod_ready.go:93] pod "kube-scheduler-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.380470  674168 pod_ready.go:82] duration metric: took 379.335122ms for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.380483  674168 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.573654  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.583209  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.669465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.669865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.074546  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.146700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.168630  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.168936  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.574572  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.646002  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.668560  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.669087  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.074484  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.147135  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.168492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.169815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.387061  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.573949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.583549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.668848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.669952  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.075164  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.085141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.168450  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.168903  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.573956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.584733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.668231  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.668811  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.074046  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.084317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.167605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.168539  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.573990  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.584073  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.668505  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.668657  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.886466  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:11.074057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.083511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.168156  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.168499  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:11.574454  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.584057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.667749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.668163  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.074025  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.083478  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.167917  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.168149  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.573943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.583638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.668188  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.074028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.084332  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:13.167761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.168109  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.385693  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.574062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.675513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.675988  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.676028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.074341  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.083682  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.167388  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.168157  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:14.574641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.584170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.667163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.074157  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.083952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.167738  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.168230  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.386551  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.573791  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.583941  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.667622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.667966  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.074020  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.083830  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.167948  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.168175  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.574271  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.583559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.668115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.668332  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.074273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.083969  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.167218  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.168238  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.574490  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.584137  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.667428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.667780  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.886239  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:18.074428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.084227  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.167720  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.168760  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:18.574681  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.583878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.667539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.668689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.074506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.085322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.167619  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.168781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.574399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.584366  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.668321  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.669055  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.886419  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.074661  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.084728  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.170023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:20.170213  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.574364  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.583499  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.667708  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.668118  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.074066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.085062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.167396  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.167749  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.573957  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.583844  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.675451  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.675661  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.073998  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.083732  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.169529  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.170522  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.386803  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.573870  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.584705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.667943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.074421  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.175976  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.176483  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.176697  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.575070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.584072  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.667372  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.668676  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.074257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.083644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.168815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.574187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.583351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.668620  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.886478  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.073866  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.084524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.168018  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.168513  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:25.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.584539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.668455  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.074005  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.084351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.167815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.168130  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.573373  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.583700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.667912  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.668223  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.075963  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.084215  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.167448  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.168236  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.385536  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.584026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.667459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.667865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.074427  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.168099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.168307  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.573283  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.583651  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.669993  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.670558  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.074299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.083891  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.167292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.168790  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.386904  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.574248  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.584292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.667547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.668470  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.073583  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.084840  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.168291  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.168832  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.573644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.583792  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.667979  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.668523  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.088101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.167412  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.168798  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.574592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.584104  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.676242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.676685  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.886012  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.074267  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.083949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.167984  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.168035  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:32.573758  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.584399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.667787  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.668680  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.073761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.084622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.168481  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.169015  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.574492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.584349  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.668163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.668466  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.886108  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.074298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.090815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.168607  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:34.574304  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.583500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.667921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.668346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.083544  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.168346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.574453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.583475  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.668056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.668420  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.074656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.084839  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.175775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.176052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.385161  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.573863  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.583168  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.667584  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.667932  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.074532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.084050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.167729  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.168857  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.575013  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.584903  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.667711  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.670115  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.148918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.150092  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.170322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:38.171681  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.449562  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.647846  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.650638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.671119  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.671851  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.073841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.084303  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.168201  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:39.168689  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.574832  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.584265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.668057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.668652  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.075222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.084398  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.169659  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.169875  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.573922  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.585047  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.667391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.668328  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.885859  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.074071  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.084506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.167576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.168542  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:41.574344  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.584143  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.669612  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.074595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.086313  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.167749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.168802  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.574390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.584540  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.668039  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.668168  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.074796  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.084081  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.175684  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.176316  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.387608  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.574180  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.583921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.668317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.668557  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.074438  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.083995  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.175579  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.175990  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.574794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.584211  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.668012  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.075097  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.083848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.167219  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.168396  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.574035  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.583614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.667959  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.668489  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.886260  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.074149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.084051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.168119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.168348  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:46.574489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.583340  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.667980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.668074  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.073991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.084011  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.167606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.167975  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.574409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.584322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.667960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.668234  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.887147  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.074367  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.083559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.168314  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.168688  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:48.574112  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.583378  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.668071  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.074306  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.084220  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.167938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.168189  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.574906  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.583879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.667488  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.667893  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.887236  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.073693  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.084184  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:50.573701  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.584183  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.667813  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.668089  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.074194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.083534  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.168108  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.168510  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.574767  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.584409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.667685  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.668584  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.887461  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.074272  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.084298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.167622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.168343  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:52.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.585518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.667629  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.668294  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.074044  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.085119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.167794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.167902  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.574468  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.584721  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.668152  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.668429  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.074187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.167885  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.168463  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.386319  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.574862  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.584077  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.667752  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.668059  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.074806  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.083967  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.167246  674168 kapi.go:107] duration metric: took 1m29.503507069s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:20:55.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.573690  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.584989  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.669563  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.159319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.159900  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.244905  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.449078  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:56.574644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.584810  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.668815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.151274  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.151865  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.245823  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.648547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.650051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.747751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.147934  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.148674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.170132  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.573817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.585119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.668821  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.886841  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.074016  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.083075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.169176  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:59.573960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.586741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.669373  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.074322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.084055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.168452  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.573877  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.584075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.669220  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.074453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.084094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.169161  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.386983  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.574518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.668575  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:02.074725  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.084554  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:02.169021  674168 kapi.go:107] duration metric: took 1m36.504626828s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:21:02.573607  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.584400  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.074502  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.084128  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.387306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.574624  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.583947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.074010  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.085435  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.584904  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.074160  674168 kapi.go:107] duration metric: took 1m35.003835312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:21:05.076015  674168 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-162403 cluster.
	I0920 18:21:05.077316  674168 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:21:05.078763  674168 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:21:05.085221  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.387394  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:05.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.084576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.646888  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.085163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.584837  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.887115  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.146524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:08.584317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.083918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.584467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.083578  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.386767  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.585465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.084980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.585791  674168 kapi.go:107] duration metric: took 1m45.006228088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:21:11.587570  674168 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 18:21:11.588892  674168 addons.go:510] duration metric: took 1m52.041283386s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 18:21:12.886529  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:14.886947  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.386798  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.886426  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.387024  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.886306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.886543  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.887497  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.386454  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.886042  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.886898  674168 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.886922  674168 pod_ready.go:82] duration metric: took 1m28.50643262s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.886933  674168 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891249  674168 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.891272  674168 pod_ready.go:82] duration metric: took 4.331899ms for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891290  674168 pod_ready.go:39] duration metric: took 1m30.927531806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:21:34.891322  674168 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:21:34.891383  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.891454  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.925385  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:34.925415  674168 cri.go:89] found id: ""
	I0920 18:21:34.925427  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:34.925481  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.928881  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.928961  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.961773  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:34.961796  674168 cri.go:89] found id: ""
	I0920 18:21:34.961806  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:34.961860  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.965452  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.965512  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.997902  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:34.997922  674168 cri.go:89] found id: ""
	I0920 18:21:34.997930  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:34.997971  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.001467  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:35.001538  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:35.033709  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.033737  674168 cri.go:89] found id: ""
	I0920 18:21:35.033747  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:35.033796  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.037117  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:35.037188  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:35.070146  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.070171  674168 cri.go:89] found id: ""
	I0920 18:21:35.070180  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:35.070232  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.073666  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:35.073742  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:35.106480  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.106505  674168 cri.go:89] found id: ""
	I0920 18:21:35.106515  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:35.106579  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.109930  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:35.110001  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:35.143353  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.143373  674168 cri.go:89] found id: ""
	I0920 18:21:35.143382  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:35.143450  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.147158  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:35.147183  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.186573  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:35.186608  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.219833  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:35.219859  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:35.296767  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:35.296802  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:35.374733  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:35.374783  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:35.397401  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:35.397441  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:35.439718  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:35.439747  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:35.481086  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:35.481119  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:35.515899  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:35.515944  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:35.614907  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:35.614941  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.669956  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:35.669994  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.705242  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:35.705275  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:38.247127  674168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:38.261085  674168 api_server.go:72] duration metric: took 2m18.713476022s to wait for apiserver process to appear ...
	I0920 18:21:38.261112  674168 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:21:38.261153  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:38.261198  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:38.294652  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.294675  674168 cri.go:89] found id: ""
	I0920 18:21:38.294683  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:38.294728  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.297926  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:38.298005  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:38.330857  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.330877  674168 cri.go:89] found id: ""
	I0920 18:21:38.330887  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:38.330948  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.334140  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:38.334194  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:38.367218  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.367245  674168 cri.go:89] found id: ""
	I0920 18:21:38.367252  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:38.367293  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.370531  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:38.370590  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:38.403339  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.403370  674168 cri.go:89] found id: ""
	I0920 18:21:38.403378  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:38.403433  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.406801  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:38.406872  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:38.439882  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.439903  674168 cri.go:89] found id: ""
	I0920 18:21:38.439912  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:38.439969  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.443320  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:38.443402  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:38.476678  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.476703  674168 cri.go:89] found id: ""
	I0920 18:21:38.476712  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:38.476769  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.479997  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:38.480061  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:38.515213  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:38.515238  674168 cri.go:89] found id: ""
	I0920 18:21:38.515246  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:38.515302  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.518573  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:38.518593  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.574209  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:38.574251  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:38.652350  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:38.652388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:38.674362  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:38.674398  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.718009  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:38.718043  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.759722  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:38.759754  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.796446  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:38.796475  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.840305  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:38.840344  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:38.940656  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:38.940691  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.974579  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:38.974605  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:39.009360  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:39.009388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:39.081734  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:39.081781  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:41.622849  674168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:21:41.627422  674168 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:21:41.628424  674168 api_server.go:141] control plane version: v1.31.1
	I0920 18:21:41.628450  674168 api_server.go:131] duration metric: took 3.367330033s to wait for apiserver health ...
	I0920 18:21:41.628460  674168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:21:41.628488  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:41.628545  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:41.661458  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:41.661477  674168 cri.go:89] found id: ""
	I0920 18:21:41.661485  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:41.661531  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.664866  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:41.664947  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:41.699349  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:41.699374  674168 cri.go:89] found id: ""
	I0920 18:21:41.699391  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:41.699448  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.702834  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:41.702894  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:41.736614  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.736638  674168 cri.go:89] found id: ""
	I0920 18:21:41.736648  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:41.736696  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.740481  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:41.740540  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:41.775612  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:41.775636  674168 cri.go:89] found id: ""
	I0920 18:21:41.775644  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:41.775692  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.779048  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:41.779108  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:41.811224  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.811253  674168 cri.go:89] found id: ""
	I0920 18:21:41.811261  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:41.811313  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.814683  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:41.814756  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:41.847730  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:41.847751  674168 cri.go:89] found id: ""
	I0920 18:21:41.847761  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:41.847811  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.851164  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:41.851221  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:41.885935  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:41.885956  674168 cri.go:89] found id: ""
	I0920 18:21:41.885964  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:41.886013  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.889575  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:41.889598  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.924023  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:41.924054  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.957638  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:41.957665  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:42.013803  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:42.013840  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:42.052343  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:42.052375  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:42.135981  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:42.136020  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:42.164238  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:42.164272  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:42.365506  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:42.365547  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:42.460595  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:42.460631  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:42.502829  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:42.502868  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:42.557032  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:42.557069  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:42.629398  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:42.629442  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:45.182962  674168 system_pods.go:59] 18 kube-system pods found
	I0920 18:21:45.183040  674168 system_pods.go:61] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.183051  674168 system_pods.go:61] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.183057  674168 system_pods.go:61] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.183062  674168 system_pods.go:61] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.183069  674168 system_pods.go:61] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.183078  674168 system_pods.go:61] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.183085  674168 system_pods.go:61] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.183094  674168 system_pods.go:61] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.183101  674168 system_pods.go:61] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.183110  674168 system_pods.go:61] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.183116  674168 system_pods.go:61] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.183122  674168 system_pods.go:61] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.183129  674168 system_pods.go:61] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.183137  674168 system_pods.go:61] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.183144  674168 system_pods.go:61] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.183152  674168 system_pods.go:61] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.183158  674168 system_pods.go:61] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.183165  674168 system_pods.go:61] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.183175  674168 system_pods.go:74] duration metric: took 3.554706193s to wait for pod list to return data ...
	I0920 18:21:45.183191  674168 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:21:45.185616  674168 default_sa.go:45] found service account: "default"
	I0920 18:21:45.185637  674168 default_sa.go:55] duration metric: took 2.436616ms for default service account to be created ...
	I0920 18:21:45.185645  674168 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:21:45.193659  674168 system_pods.go:86] 18 kube-system pods found
	I0920 18:21:45.193684  674168 system_pods.go:89] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.193693  674168 system_pods.go:89] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.193697  674168 system_pods.go:89] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.193700  674168 system_pods.go:89] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.193704  674168 system_pods.go:89] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.193708  674168 system_pods.go:89] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.193712  674168 system_pods.go:89] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.193715  674168 system_pods.go:89] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.193719  674168 system_pods.go:89] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.193723  674168 system_pods.go:89] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.193726  674168 system_pods.go:89] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.193730  674168 system_pods.go:89] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.193733  674168 system_pods.go:89] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.193737  674168 system_pods.go:89] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.193741  674168 system_pods.go:89] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.193744  674168 system_pods.go:89] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.193749  674168 system_pods.go:89] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.193755  674168 system_pods.go:89] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.193761  674168 system_pods.go:126] duration metric: took 8.110899ms to wait for k8s-apps to be running ...
	I0920 18:21:45.193769  674168 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:21:45.193838  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:21:45.204913  674168 system_svc.go:56] duration metric: took 11.134209ms WaitForService to wait for kubelet
	I0920 18:21:45.204952  674168 kubeadm.go:582] duration metric: took 2m25.657338244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:21:45.204980  674168 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:21:45.208110  674168 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 18:21:45.208138  674168 node_conditions.go:123] node cpu capacity is 8
	I0920 18:21:45.208151  674168 node_conditions.go:105] duration metric: took 3.164779ms to run NodePressure ...
	I0920 18:21:45.208162  674168 start.go:241] waiting for startup goroutines ...
	I0920 18:21:45.208172  674168 start.go:246] waiting for cluster config update ...
	I0920 18:21:45.208187  674168 start.go:255] writing updated cluster config ...
	I0920 18:21:45.208459  674168 ssh_runner.go:195] Run: rm -f paused
	I0920 18:21:45.256980  674168 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:21:45.259386  674168 out.go:177] * Done! kubectl is now configured to use "addons-162403" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.135080915Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-tvlhv/hello-world-app" id=539bc7bb-e8a2-4e48-a2a3-80c748c3ca58 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.135197358Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.150145688Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/84cd3dacc812edb3c4e7952e195c12055f468130ecaae892c67dfe2dfc80249e/merged/etc/passwd: no such file or directory"
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.150183036Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/84cd3dacc812edb3c4e7952e195c12055f468130ecaae892c67dfe2dfc80249e/merged/etc/group: no such file or directory"
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.185456078Z" level=info msg="Created container 83a739692c73d510e40d3feff073e50cf9cc4ead5bd941da19a3ded69df8fbbd: default/hello-world-app-55bf9c44b4-tvlhv/hello-world-app" id=539bc7bb-e8a2-4e48-a2a3-80c748c3ca58 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.186123002Z" level=info msg="Starting container: 83a739692c73d510e40d3feff073e50cf9cc4ead5bd941da19a3ded69df8fbbd" id=3d04b8d1-ba35-454e-91b4-975b887059f2 name=/runtime.v1.RuntimeService/StartContainer
	Sep 20 18:32:37 addons-162403 crio[1027]: time="2024-09-20 18:32:37.192140007Z" level=info msg="Started container" PID=11344 containerID=83a739692c73d510e40d3feff073e50cf9cc4ead5bd941da19a3ded69df8fbbd description=default/hello-world-app-55bf9c44b4-tvlhv/hello-world-app id=3d04b8d1-ba35-454e-91b4-975b887059f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b71637a3e61ec0894c3f02f235519d265bf8b17b9842052ac5f8f3c7bd787652
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.415926673Z" level=warning msg="Stopping container 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=e322cd2c-7873-4260-9215-034a5dfe006e name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:32:38 addons-162403 conmon[5350]: conmon 4e295275fa2d8b4b52d8 <ninfo>: container 5362 exited with status 137
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.549399554Z" level=info msg="Stopped container 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74: ingress-nginx/ingress-nginx-controller-bc57996ff-wxtns/controller" id=e322cd2c-7873-4260-9215-034a5dfe006e name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.549982289Z" level=info msg="Stopping pod sandbox: 09e264dfd2260c4fd2c3565f7064a586e38d3e3b07aa3924530786683eb1f13f" id=1bcdb5ac-f10e-499b-a12f-62d91133f3b3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.553386541Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-4SI7UYJFV3VI6VVJ - [0:0]\n:KUBE-HP-BGP27PFMIVWJTECO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-4SI7UYJFV3VI6VVJ\n-X KUBE-HP-BGP27PFMIVWJTECO\nCOMMIT\n"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.554837491Z" level=info msg="Closing host port tcp:80"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.554889155Z" level=info msg="Closing host port tcp:443"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.556326510Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.556347725Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.556545438Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-wxtns Namespace:ingress-nginx ID:09e264dfd2260c4fd2c3565f7064a586e38d3e3b07aa3924530786683eb1f13f UID:682363dd-9574-4aa6-b0df-2d77ce4696a9 NetNS:/var/run/netns/38f93dea-a838-49bf-b04a-9023c59b3148 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.556709536Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-wxtns from CNI network \"kindnet\" (type=ptp)"
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.608484218Z" level=info msg="Stopped pod sandbox: 09e264dfd2260c4fd2c3565f7064a586e38d3e3b07aa3924530786683eb1f13f" id=1bcdb5ac-f10e-499b-a12f-62d91133f3b3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.878569382Z" level=info msg="Removing container: 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74" id=3e69999f-0b44-45b7-8895-49e6dff3697e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 18:32:38 addons-162403 crio[1027]: time="2024-09-20 18:32:38.892429359Z" level=info msg="Removed container 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74: ingress-nginx/ingress-nginx-controller-bc57996ff-wxtns/controller" id=3e69999f-0b44-45b7-8895-49e6dff3697e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 18:32:40 addons-162403 crio[1027]: time="2024-09-20 18:32:40.446696997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2957c4c2-ee67-413b-93bb-4f0c2fa8400d name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:32:40 addons-162403 crio[1027]: time="2024-09-20 18:32:40.446991452Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2957c4c2-ee67-413b-93bb-4f0c2fa8400d name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:32:40 addons-162403 crio[1027]: time="2024-09-20 18:32:40.447893582Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6895b10-57a5-448a-8dbc-1ccb23e7ba7b name=/runtime.v1.ImageService/PullImage
	Sep 20 18:32:40 addons-162403 crio[1027]: time="2024-09-20 18:32:40.452524496Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	83a739692c73d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   b71637a3e61ec       hello-world-app-55bf9c44b4-tvlhv
	c25d2267ccfdd       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   6293a840e0f65       nginx
	167a7699d2ad7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   cff5d95699f6e       gcp-auth-89d5ffd79-742xn
	5e23c290c5292       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   13a96ce846052       ingress-nginx-admission-patch-8jqwt
	5d495cc4d007d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner    0                   2a8ec889be7b5       local-path-provisioner-86d989889c-v5k84
	acca616b5cd64       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   6a8890c1b1e3b       metrics-server-84c5f94fbc-gr2ct
	2cb6bb0b06bb3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   fe30db5645655       ingress-nginx-admission-create-ct9rs
	cdb59912f2e14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   10529a41c309c       coredns-7c65d6cfc9-24mgs
	525f045aa748e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   612ce81908c78       storage-provisioner
	0a3bc23a91121       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             13 minutes ago      Running             kindnet-cni               0                   7f6e1d53fda98       kindnet-j7fr4
	52c52923ef8ea       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   ae303bad1ebff       kube-proxy-dd8cb
	4b71192f65f2d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   2a41178034cbc       kube-controller-manager-addons-162403
	249ac20417667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   e48f7866753bd       kube-scheduler-addons-162403
	c4ad43014a83b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   bdef69edf9acd       etcd-addons-162403
	f38c04f167d00       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   c3f039afa24e9       kube-apiserver-addons-162403
	
	
	==> coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] <==
	[INFO] 10.244.0.18:52396 - 19852 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128047s
	[INFO] 10.244.0.18:44347 - 60145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068286s
	[INFO] 10.244.0.18:44347 - 17143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094705s
	[INFO] 10.244.0.18:46410 - 18873 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005052499s
	[INFO] 10.244.0.18:46410 - 26037 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.007222025s
	[INFO] 10.244.0.18:34432 - 34096 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00440361s
	[INFO] 10.244.0.18:34432 - 33069 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006320363s
	[INFO] 10.244.0.18:48014 - 36175 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004973376s
	[INFO] 10.244.0.18:48014 - 51266 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006232888s
	[INFO] 10.244.0.18:55384 - 9190 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082322s
	[INFO] 10.244.0.18:55384 - 6628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129501s
	[INFO] 10.244.0.20:48448 - 47225 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223503s
	[INFO] 10.244.0.20:55693 - 31699 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271037s
	[INFO] 10.244.0.20:57762 - 4868 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147825s
	[INFO] 10.244.0.20:41977 - 42962 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138482s
	[INFO] 10.244.0.20:35780 - 25623 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090618s
	[INFO] 10.244.0.20:35231 - 28557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160324s
	[INFO] 10.244.0.20:37823 - 1338 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005223073s
	[INFO] 10.244.0.20:35707 - 7420 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00534569s
	[INFO] 10.244.0.20:59126 - 24034 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005715821s
	[INFO] 10.244.0.20:41947 - 25595 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006074152s
	[INFO] 10.244.0.20:60551 - 48110 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004720674s
	[INFO] 10.244.0.20:47355 - 8992 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005126856s
	[INFO] 10.244.0.20:41941 - 3315 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002301451s
	[INFO] 10.244.0.20:35273 - 35195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002359301s
	
	
	==> describe nodes <==
	Name:               addons-162403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-162403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-162403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-162403
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:19:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-162403
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:31:19 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:31:19 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:31:19 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:31:19 +0000   Fri, 20 Sep 2024 18:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-162403
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 84fc0251f2cc47d9b8eafd449e71e23a
	  System UUID:                a1b78626-3ab2-4437-8dfa-b9488af04241
	  Boot ID:                    1090cbe7-7e52-40cc-b00d-227cb699fd1e
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-tvlhv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-742xn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-24mgs                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-addons-162403                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-j7fr4                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-162403               250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-162403      200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dd8cb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-162403               100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-gr2ct            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-v5k84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node addons-162403 event: Registered Node addons-162403 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-162403 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 16 6d e1 19 46 08 06
	[  +6.907947] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 0c fb 31 c2 61 08 06
	[ +27.701132] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 51 e2 82 fa 23 08 06
	[  +0.958821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 b8 e7 f5 d7 b1 08 06
	[  +0.036400] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a e8 33 86 c0 c3 08 06
	[Sep20 18:07] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 77 f7 48 11 3e 08 06
	[Sep20 18:30] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +1.015314] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +2.011792] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +4.255527] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +8.195086] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[ +16.122214] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[Sep20 18:31] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	
	
	==> etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] <==
	{"level":"info","ts":"2024-09-20T18:19:21.249218Z","caller":"traceutil/trace.go:171","msg":"trace[1967390529] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"203.55059ms","start":"2024-09-20T18:19:21.045640Z","end":"2024-09-20T18:19:21.249190Z","steps":["trace[1967390529] 'process raft request'  (duration: 16.993754ms)","trace[1967390529] 'compare'  (duration: 90.373951ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:21.551948Z","caller":"traceutil/trace.go:171","msg":"trace[804211122] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"107.356574ms","start":"2024-09-20T18:19:21.444570Z","end":"2024-09-20T18:19:21.551927Z","steps":["trace[804211122] 'process raft request'  (duration: 100.501186ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.552185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.503897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-20T18:19:21.552216Z","caller":"traceutil/trace.go:171","msg":"trace[1547814754] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"105.546848ms","start":"2024-09-20T18:19:21.446660Z","end":"2024-09-20T18:19:21.552207Z","steps":["trace[1547814754] 'agreement among raft nodes before linearized reading'  (duration: 105.471396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.552378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.065959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-20T18:19:21.554478Z","caller":"traceutil/trace.go:171","msg":"trace[813399487] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:394; }","duration":"110.162094ms","start":"2024-09-20T18:19:21.444302Z","end":"2024-09-20T18:19:21.554464Z","steps":["trace[813399487] 'agreement among raft nodes before linearized reading'  (duration: 108.03856ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.961265Z","caller":"traceutil/trace.go:171","msg":"trace[125269741] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:411; }","duration":"106.622713ms","start":"2024-09-20T18:19:21.854600Z","end":"2024-09-20T18:19:21.961223Z","steps":["trace[125269741] 'read index received'  (duration: 10.382865ms)","trace[125269741] 'applied index is now lower than readState.Index'  (duration: 96.239015ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:21.962257Z","caller":"traceutil/trace.go:171","msg":"trace[622144423] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"105.453568ms","start":"2024-09-20T18:19:21.856784Z","end":"2024-09-20T18:19:21.962238Z","steps":["trace[622144423] 'process raft request'  (duration: 104.328353ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.962484Z","caller":"traceutil/trace.go:171","msg":"trace[1676311620] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"107.945568ms","start":"2024-09-20T18:19:21.854521Z","end":"2024-09-20T18:19:21.962467Z","steps":["trace[1676311620] 'process raft request'  (duration: 106.387415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.476893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:21.964893Z","caller":"traceutil/trace.go:171","msg":"trace[911468214] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"110.230982ms","start":"2024-09-20T18:19:21.854646Z","end":"2024-09-20T18:19:21.964877Z","steps":["trace[911468214] 'agreement among raft nodes before linearized reading'  (duration: 108.318206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.037025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-66scz\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-09-20T18:19:21.961930Z","caller":"traceutil/trace.go:171","msg":"trace[1847307825] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"107.308099ms","start":"2024-09-20T18:19:21.854610Z","end":"2024-09-20T18:19:21.961918Z","steps":["trace[1847307825] 'process raft request'  (duration: 106.453351ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.965508Z","caller":"traceutil/trace.go:171","msg":"trace[534112644] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-66scz; range_end:; response_count:1; response_revision:409; }","duration":"110.908607ms","start":"2024-09-20T18:19:21.854588Z","end":"2024-09-20T18:19:21.965497Z","steps":["trace[534112644] 'agreement among raft nodes before linearized reading'  (duration: 109.014234ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.259610Z","caller":"traceutil/trace.go:171","msg":"trace[515300591] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"197.066865ms","start":"2024-09-20T18:19:22.062522Z","end":"2024-09-20T18:19:22.259589Z","steps":["trace[515300591] 'process raft request'  (duration: 97.886637ms)","trace[515300591] 'compare'  (duration: 98.78174ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:22.259756Z","caller":"traceutil/trace.go:171","msg":"trace[414203013] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"196.971979ms","start":"2024-09-20T18:19:22.062775Z","end":"2024-09-20T18:19:22.259747Z","steps":["trace[414203013] 'read index received'  (duration: 84.675819ms)","trace[414203013] 'applied index is now lower than readState.Index'  (duration: 112.295168ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:19:22.259853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.062034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:22.259884Z","caller":"traceutil/trace.go:171","msg":"trace[1096776429] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:413; }","duration":"197.105208ms","start":"2024-09-20T18:19:22.062771Z","end":"2024-09-20T18:19:22.259876Z","steps":["trace[1096776429] 'agreement among raft nodes before linearized reading'  (duration: 197.01208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.260069Z","caller":"traceutil/trace.go:171","msg":"trace[1236995037] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"108.527632ms","start":"2024-09-20T18:19:22.151533Z","end":"2024-09-20T18:19:22.260061Z","steps":["trace[1236995037] 'process raft request'  (duration: 107.765823ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:23.355227Z","caller":"traceutil/trace.go:171","msg":"trace[850183716] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"102.197243ms","start":"2024-09-20T18:19:23.253005Z","end":"2024-09-20T18:19:23.355202Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:19:23.355525Z","caller":"traceutil/trace.go:171","msg":"trace[673964805] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"102.739687ms","start":"2024-09-20T18:19:23.252776Z","end":"2024-09-20T18:19:23.355515Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:20:56.075302Z","caller":"traceutil/trace.go:171","msg":"trace[1129311574] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"107.719546ms","start":"2024-09-20T18:20:55.967566Z","end":"2024-09-20T18:20:56.075286Z","steps":["trace[1129311574] 'process raft request'  (duration: 107.623877ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:29:10.697461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-20T18:29:10.721647Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"23.706571ms","hash":2292866617,"current-db-size-bytes":6184960,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3268608,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T18:29:10.721703Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2292866617,"revision":1534,"compact-revision":-1}
	
	
	==> gcp-auth [167a7699d2ad79a24795ae8d77140ef7ac5625e2824cc3968217f95fcb44cb62] <==
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:21:45 Ready to marshal response ...
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:29:56 Ready to marshal response ...
	2024/09/20 18:29:56 Ready to write response ...
	2024/09/20 18:29:58 Ready to marshal response ...
	2024/09/20 18:29:58 Ready to write response ...
	2024/09/20 18:30:12 Ready to marshal response ...
	2024/09/20 18:30:12 Ready to write response ...
	2024/09/20 18:30:22 Ready to marshal response ...
	2024/09/20 18:30:22 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:56 Ready to marshal response ...
	2024/09/20 18:30:56 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:32:33 Ready to marshal response ...
	2024/09/20 18:32:33 Ready to write response ...
	
	
	==> kernel <==
	 18:32:43 up  2:15,  0 users,  load average: 0.38, 0.49, 0.92
	Linux addons-162403 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] <==
	I0920 18:30:43.647138       1 main.go:299] handling current node
	I0920 18:30:53.644203       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:30:53.644254       1 main.go:299] handling current node
	I0920 18:31:03.647053       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:03.647090       1 main.go:299] handling current node
	I0920 18:31:13.644658       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:13.644701       1 main.go:299] handling current node
	I0920 18:31:23.644288       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:23.644331       1 main.go:299] handling current node
	I0920 18:31:33.651069       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:33.651107       1 main.go:299] handling current node
	I0920 18:31:43.646532       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:43.646566       1 main.go:299] handling current node
	I0920 18:31:53.648355       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:31:53.648394       1 main.go:299] handling current node
	I0920 18:32:03.648119       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:03.648155       1 main.go:299] handling current node
	I0920 18:32:13.644219       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:13.644298       1 main.go:299] handling current node
	I0920 18:32:23.644972       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:23.645009       1 main.go:299] handling current node
	I0920 18:32:33.644608       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:33.644789       1 main.go:299] handling current node
	I0920 18:32:43.644653       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:43.644698       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:21:34.561794       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.35.12:443: connect: connection refused" logger="UnhandledError"
	I0920 18:21:34.599060       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 18:30:06.424489       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:30:07.441395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:30:09.437271       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:11.895465       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:30:12.151933       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.87.74"}
	I0920 18:30:37.871661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.871723       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.887033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.887175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.892360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.892420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.898321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.898486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.949118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.949160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 18:30:38.893058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 18:30:38.950022       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 18:30:38.957732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:57.308382       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.229.203"}
	I0920 18:32:33.948290       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.42.120"}
	
	
	==> kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] <==
	W0920 18:31:22.309425       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:31:22.309476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:31:46.833716       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:31:46.833766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:31:49.200895       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:31:49.200938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:31:56.198701       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:31:56.198745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:32:17.998255       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:32:17.998299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:32:27.486537       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:32:27.486584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:32:32.515205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:32:32.515250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:32:33.705098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.732616ms"
	I0920 18:32:33.709587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.330085ms"
	I0920 18:32:33.709689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.816µs"
	I0920 18:32:33.712652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.035µs"
	I0920 18:32:35.393536       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 18:32:35.396695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.677µs"
	I0920 18:32:35.398340       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 18:32:37.889743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.673586ms"
	I0920 18:32:37.889816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.52µs"
	W0920 18:32:43.358596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:32:43.358642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] <==
	I0920 18:19:23.348890       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:19:24.053513       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:19:24.053686       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:19:24.461765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:19:24.461911       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:19:24.544987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:19:24.545696       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:19:24.545778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:19:24.547965       1 config.go:199] "Starting service config controller"
	I0920 18:19:24.549760       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:19:24.549176       1 config.go:328] "Starting node config controller"
	I0920 18:19:24.549328       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:19:24.549802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:19:24.549809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:19:24.651702       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] <==
	W0920 18:19:12.051578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0920 18:19:12.051599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:12.051631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:19:12.051630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:19:12.052678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:19:12.052843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.855801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.855855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.864661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:19:12.864714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:19:12.882432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:19:12.882477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.910952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.911024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.925403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:19:12.925449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.010499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:13.010542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.081617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.081680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.166464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.166510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 18:19:15.650022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.859716    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmwr7\" (UniqueName: \"kubernetes.io/projected/254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5-kube-api-access-pmwr7\") pod \"254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5\" (UID: \"254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5\") "
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.861837    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5-kube-api-access-pmwr7" (OuterVolumeSpecName: "kube-api-access-pmwr7") pod "254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5" (UID: "254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5"). InnerVolumeSpecName "kube-api-access-pmwr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.862695    1624 scope.go:117] "RemoveContainer" containerID="bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0"
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.878617    1624 scope.go:117] "RemoveContainer" containerID="bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0"
	Sep 20 18:32:34 addons-162403 kubelet[1624]: E0920 18:32:34.879026    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0\": container with ID starting with bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0 not found: ID does not exist" containerID="bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0"
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.879062    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0"} err="failed to get container status \"bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0\": rpc error: code = NotFound desc = could not find container \"bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0\": container with ID starting with bffafc70e2bfb5c01df15a922ac34f62f437df7bb3515aad2af0a9325d2f93e0 not found: ID does not exist"
	Sep 20 18:32:34 addons-162403 kubelet[1624]: I0920 18:32:34.960457    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pmwr7\" (UniqueName: \"kubernetes.io/projected/254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5-kube-api-access-pmwr7\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:32:36 addons-162403 kubelet[1624]: I0920 18:32:36.447693    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5" path="/var/lib/kubelet/pods/254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5/volumes"
	Sep 20 18:32:36 addons-162403 kubelet[1624]: I0920 18:32:36.448050    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a61a31f-6c24-48e8-8401-f9e162308c11" path="/var/lib/kubelet/pods/7a61a31f-6c24-48e8-8401-f9e162308c11/volumes"
	Sep 20 18:32:36 addons-162403 kubelet[1624]: I0920 18:32:36.448333    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d121d1-2cd7-49c8-92f7-2bc2239a9a2d" path="/var/lib/kubelet/pods/a1d121d1-2cd7-49c8-92f7-2bc2239a9a2d/volumes"
	Sep 20 18:32:37 addons-162403 kubelet[1624]: I0920 18:32:37.884461    1624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-tvlhv" podStartSLOduration=1.826010836 podStartE2EDuration="4.884437171s" podCreationTimestamp="2024-09-20 18:32:33 +0000 UTC" firstStartedPulling="2024-09-20 18:32:34.074572735 +0000 UTC m=+799.756950172" lastFinishedPulling="2024-09-20 18:32:37.132999068 +0000 UTC m=+802.815376507" observedRunningTime="2024-09-20 18:32:37.883965655 +0000 UTC m=+803.566343101" watchObservedRunningTime="2024-09-20 18:32:37.884437171 +0000 UTC m=+803.566814618"
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.783071    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnqr7\" (UniqueName: \"kubernetes.io/projected/682363dd-9574-4aa6-b0df-2d77ce4696a9-kube-api-access-vnqr7\") pod \"682363dd-9574-4aa6-b0df-2d77ce4696a9\" (UID: \"682363dd-9574-4aa6-b0df-2d77ce4696a9\") "
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.783134    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/682363dd-9574-4aa6-b0df-2d77ce4696a9-webhook-cert\") pod \"682363dd-9574-4aa6-b0df-2d77ce4696a9\" (UID: \"682363dd-9574-4aa6-b0df-2d77ce4696a9\") "
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.785004    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682363dd-9574-4aa6-b0df-2d77ce4696a9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "682363dd-9574-4aa6-b0df-2d77ce4696a9" (UID: "682363dd-9574-4aa6-b0df-2d77ce4696a9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.785002    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682363dd-9574-4aa6-b0df-2d77ce4696a9-kube-api-access-vnqr7" (OuterVolumeSpecName: "kube-api-access-vnqr7") pod "682363dd-9574-4aa6-b0df-2d77ce4696a9" (UID: "682363dd-9574-4aa6-b0df-2d77ce4696a9"). InnerVolumeSpecName "kube-api-access-vnqr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.877339    1624 scope.go:117] "RemoveContainer" containerID="4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74"
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.883946    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vnqr7\" (UniqueName: \"kubernetes.io/projected/682363dd-9574-4aa6-b0df-2d77ce4696a9-kube-api-access-vnqr7\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.883983    1624 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/682363dd-9574-4aa6-b0df-2d77ce4696a9-webhook-cert\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.892671    1624 scope.go:117] "RemoveContainer" containerID="4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74"
	Sep 20 18:32:38 addons-162403 kubelet[1624]: E0920 18:32:38.893077    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74\": container with ID starting with 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74 not found: ID does not exist" containerID="4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74"
	Sep 20 18:32:38 addons-162403 kubelet[1624]: I0920 18:32:38.893122    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74"} err="failed to get container status \"4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74\": rpc error: code = NotFound desc = could not find container \"4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74\": container with ID starting with 4e295275fa2d8b4b52d8b9a6c39cfb76baa90c87c8d47ecde0b9278393e81b74 not found: ID does not exist"
	Sep 20 18:32:40 addons-162403 kubelet[1624]: I0920 18:32:40.447594    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682363dd-9574-4aa6-b0df-2d77ce4696a9" path="/var/lib/kubelet/pods/682363dd-9574-4aa6-b0df-2d77ce4696a9/volumes"
	Sep 20 18:32:40 addons-162403 kubelet[1624]: E0920 18:32:40.564901    1624 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 20 18:32:40 addons-162403 kubelet[1624]: E0920 18:32:40.565101    1624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4hs2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox_default(3994a86e-6df2-4cd1-b7ae-47433e7d9eef): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" logger="UnhandledError"
	Sep 20 18:32:40 addons-162403 kubelet[1624]: E0920 18:32:40.566329    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="3994a86e-6df2-4cd1-b7ae-47433e7d9eef"
	
	
	==> storage-provisioner [525f045aa748e6ea6058a19f28604c5472b307505ab4e997fc5024dd5e9d9ef2] <==
	I0920 18:20:05.073668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:20:05.085517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:20:05.085586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:20:05.094317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:20:05.094479       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	I0920 18:20:05.094902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6c3edb-f643-4302-b044-b3279df05602", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06 became leader
	I0920 18:20:05.195504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-162403 -n addons-162403
helpers_test.go:261: (dbg) Run:  kubectl --context addons-162403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-162403 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-162403 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-162403/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:21:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hs2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4hs2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/busybox to addons-162403
	  Normal   Pulling    9m29s (x4 over 10m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m29s (x4 over 10m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m29s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m15s (x6 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    56s (x42 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.94s)
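The event cadence in the `describe pod busybox` output above (`Pulling` x4 over 10m, `ImagePullBackOff` x6, `BackOff` x42) reflects kubelet's image-pull backoff, which doubles the wait after each failed pull up to a cap. A minimal sketch of that schedule, assuming the default 10s initial delay and 5m cap (illustrative only, not kubelet's actual code):

```python
def backoff_schedule(initial=10, cap=300, attempts=8):
    """Kubelet-style image-pull backoff: the delay doubles per failure,
    capped at `cap` seconds (defaults assume kubelet's 10s/5m values)."""
    delay, out = initial, []
    for _ in range(attempts):
        out.append(delay)
        delay = min(delay * 2, cap)
    return out

print(backoff_schedule())  # [10, 20, 40, 80, 160, 300, 300, 300]
```

Once the cap is reached, retries settle at roughly one pull attempt per 5 minutes, which is why the `BackOff` count keeps growing while `Pulling` stays at x4.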

TestAddons/parallel/MetricsServer (296.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.133153ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003496172s
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (63.598874ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 10m39.620900285s

** /stderr **
I0920 18:29:58.623005  672823 retry.go:31] will retry after 3.43094587s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (72.055992ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 10m43.124721186s

** /stderr **
I0920 18:30:02.126815  672823 retry.go:31] will retry after 6.26646181s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (64.816664ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 10m49.456262632s

** /stderr **
I0920 18:30:08.458651  672823 retry.go:31] will retry after 8.459686742s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (66.886272ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 10m57.984392284s

** /stderr **
I0920 18:30:16.986390  672823 retry.go:31] will retry after 15.14677427s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (83.490359ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 11m13.215057461s

** /stderr **
I0920 18:30:32.217196  672823 retry.go:31] will retry after 12.870055134s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (67.58459ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 11m26.152920487s

** /stderr **
I0920 18:30:45.155908  672823 retry.go:31] will retry after 21.271649955s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (64.8233ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 11m47.490472042s

** /stderr **
I0920 18:31:06.492681  672823 retry.go:31] will retry after 21.743093368s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (63.368457ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 12m9.299525615s

** /stderr **
I0920 18:31:28.302031  672823 retry.go:31] will retry after 56.7053887s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (62.384582ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 13m6.071970514s

** /stderr **
I0920 18:32:25.074015  672823 retry.go:31] will retry after 31.230973524s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (63.109739ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 13m37.367191289s

** /stderr **
I0920 18:32:56.369314  672823 retry.go:31] will retry after 1m19.184632359s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (64.582201ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 14m56.622848653s

** /stderr **
I0920 18:34:15.624962  672823 retry.go:31] will retry after 31.782453361s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-162403 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-162403 top pods -n kube-system: exit status 1 (64.331356ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-24mgs, age: 15m28.470025393s

** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
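The `will retry after …` intervals logged above (3.4s, 6.3s, 8.5s, 15.1s, …) come from a jittered geometric backoff around repeated `kubectl top` invocations. A minimal, self-contained sketch of that polling pattern (hypothetical helper, not minikube's actual `retry.go` API; no jitter for determinism):

```python
import subprocess
import time

def poll(cmd, timeout=300.0, initial=2.0, factor=1.6, cap=60.0):
    """Re-run `cmd` until it exits 0 or the deadline passes, growing the
    wait between attempts geometrically up to `cap` seconds."""
    deadline = time.monotonic() + timeout
    delay = initial
    while True:
        rc = subprocess.run(cmd, capture_output=True).returncode
        if rc == 0:
            return True
        if time.monotonic() + delay > deadline:
            return False  # out of budget, mirror the test's final failure
        time.sleep(delay)
        delay = min(delay * factor, cap)

# e.g. poll(["kubectl", "--context", "addons-162403",
#            "top", "pods", "-n", "kube-system"])
```

Here every attempt fails with exit status 1 until the overall budget runs out, which is what produces the final `failed checking metric server` above.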
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-162403
helpers_test.go:235: (dbg) docker inspect addons-162403:

-- stdout --
	[
	    {
	        "Id": "106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7",
	        "Created": "2024-09-20T18:19:01.134918747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 674901,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:19:01.25004308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/hosts",
	        "LogPath": "/var/lib/docker/containers/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7-json.log",
	        "Name": "/addons-162403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-162403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-162403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3-init/diff:/var/lib/docker/overlay2/eaa029c0352c09d5301213b292ed71be17ad3c7af9b304910b3afcbb6087e2a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46b0d8ce2eae79604ea4eb97b6b4e36eeb4ca9310c61f276a053866b944a16b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-162403",
	                "Source": "/var/lib/docker/volumes/addons-162403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-162403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-162403",
	                "name.minikube.sigs.k8s.io": "addons-162403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "488a22f7f2606afe4be623bfdfd275b5b8331f1b931576ea9ec822158b58c0ce",
	            "SandboxKey": "/var/run/docker/netns/488a22f7f260",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-162403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d0901782c3c8698a9caccb5c84dc1c7ad2c5eb6d0b068119a7aad73f3dbaa435",
	                    "EndpointID": "035274aa910e41c214a6f521c4fc53fb707a6152897b47a404b57c9e4e462cf6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-162403",
	                        "106a9fd3effc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-162403 -n addons-162403
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 logs -n 25: (1.217402647s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-183655                                                                     | download-only-183655   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | download-docker-729301                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-729301                                                                   | download-docker-729301 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | binary-mirror-249385                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43551                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-249385                                                                     | binary-mirror-249385   | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| addons  | enable dashboard -p                                                                         | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-162403 --wait=true                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:29 UTC | 20 Sep 24 18:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh curl -s                                                                   | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | addons-162403                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-162403 ssh cat                                                                       | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | /opt/local-path-provisioner/pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | -p addons-162403                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-162403 ip                                                                            | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:30 UTC | 20 Sep 24 18:30 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:31 UTC | 20 Sep 24 18:31 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-162403 ip                                                                            | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-162403 addons disable                                                                | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:32 UTC | 20 Sep 24 18:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-162403 addons                                                                        | addons-162403          | jenkins | v1.34.0 | 20 Sep 24 18:34 UTC | 20 Sep 24 18:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:18:38
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:18:38.955255  674168 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:38.955393  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955405  674168 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:38.955420  674168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:38.955592  674168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:18:38.956218  674168 out.go:352] Setting JSON to false
	I0920 18:18:38.957151  674168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7263,"bootTime":1726849056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:18:38.957258  674168 start.go:139] virtualization: kvm guest
	I0920 18:18:38.959268  674168 out.go:177] * [addons-162403] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:18:38.960748  674168 notify.go:220] Checking for updates...
	I0920 18:18:38.960767  674168 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:18:38.962055  674168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:18:38.963377  674168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:18:38.964538  674168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:18:38.965672  674168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:18:38.966885  674168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:18:38.968185  674168 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:18:38.989387  674168 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:18:38.989471  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.033969  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.025186058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.034102  674168 docker.go:318] overlay module found
	I0920 18:18:39.035798  674168 out.go:177] * Using the docker driver based on user configuration
	I0920 18:18:39.037025  674168 start.go:297] selected driver: docker
	I0920 18:18:39.037039  674168 start.go:901] validating driver "docker" against <nil>
	I0920 18:18:39.037051  674168 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:18:39.037947  674168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:39.085086  674168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:39.076841302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:39.085255  674168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:18:39.085496  674168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:18:39.087167  674168 out.go:177] * Using Docker driver with root privileges
	I0920 18:18:39.088532  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:18:39.088595  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:18:39.088606  674168 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:18:39.088665  674168 start.go:340] cluster config:
	{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:39.089923  674168 out.go:177] * Starting "addons-162403" primary control-plane node in "addons-162403" cluster
	I0920 18:18:39.091072  674168 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:18:39.092598  674168 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:18:39.094070  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:39.094104  674168 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:18:39.094121  674168 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:39.094133  674168 cache.go:56] Caching tarball of preloaded images
	I0920 18:18:39.094252  674168 preload.go:172] Found /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 18:18:39.094263  674168 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:18:39.094613  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:18:39.094639  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json: {Name:mka678336c738f0ad3cca0a057f366143df6dca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:18:39.109272  674168 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:39.109425  674168 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:18:39.109447  674168 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:18:39.109453  674168 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:18:39.109467  674168 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:18:39.109477  674168 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:18:51.189040  674168 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:18:51.189079  674168 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:18:51.189135  674168 start.go:360] acquireMachinesLock for addons-162403: {Name:mk331c03eda7bf008a5f6618682622fc66137de8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:18:51.189234  674168 start.go:364] duration metric: took 78.073µs to acquireMachinesLock for "addons-162403"
	I0920 18:18:51.189258  674168 start.go:93] Provisioning new machine with config: &{Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:18:51.189337  674168 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:18:51.191508  674168 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:18:51.191775  674168 start.go:159] libmachine.API.Create for "addons-162403" (driver="docker")
	I0920 18:18:51.191808  674168 client.go:168] LocalClient.Create starting
	I0920 18:18:51.191901  674168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem
	I0920 18:18:51.507907  674168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem
	I0920 18:18:51.677159  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:18:51.691915  674168 cli_runner.go:211] docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:18:51.692010  674168 network_create.go:284] running [docker network inspect addons-162403] to gather additional debugging logs...
	I0920 18:18:51.692035  674168 cli_runner.go:164] Run: docker network inspect addons-162403
	W0920 18:18:51.707711  674168 cli_runner.go:211] docker network inspect addons-162403 returned with exit code 1
	I0920 18:18:51.707746  674168 network_create.go:287] error running [docker network inspect addons-162403]: docker network inspect addons-162403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-162403 not found
	I0920 18:18:51.707769  674168 network_create.go:289] output of [docker network inspect addons-162403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-162403 not found
	
	** /stderr **
	I0920 18:18:51.707870  674168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:18:51.723682  674168 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a5f410}
	I0920 18:18:51.723727  674168 network_create.go:124] attempt to create docker network addons-162403 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:18:51.723786  674168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-162403 addons-162403
	I0920 18:18:51.787135  674168 network_create.go:108] docker network addons-162403 192.168.49.0/24 created
	I0920 18:18:51.787171  674168 kic.go:121] calculated static IP "192.168.49.2" for the "addons-162403" container
	I0920 18:18:51.787234  674168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:18:51.802456  674168 cli_runner.go:164] Run: docker volume create addons-162403 --label name.minikube.sigs.k8s.io=addons-162403 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:18:51.819456  674168 oci.go:103] Successfully created a docker volume addons-162403
	I0920 18:18:51.819546  674168 cli_runner.go:164] Run: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:18:56.747820  674168 cli_runner.go:217] Completed: docker run --rm --name addons-162403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --entrypoint /usr/bin/test -v addons-162403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (4.92822817s)
	I0920 18:18:56.747853  674168 oci.go:107] Successfully prepared a docker volume addons-162403
	I0920 18:18:56.747870  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:56.747891  674168 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:18:56.747948  674168 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:19:01.072064  674168 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-162403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324069588s)
	I0920 18:19:01.072104  674168 kic.go:203] duration metric: took 4.324208181s to extract preloaded images to volume ...
	W0920 18:19:01.072245  674168 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:19:01.072342  674168 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:19:01.120121  674168 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-162403 --name addons-162403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-162403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-162403 --network addons-162403 --ip 192.168.49.2 --volume addons-162403:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:19:01.433919  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Running}}
	I0920 18:19:01.451773  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.468968  674168 cli_runner.go:164] Run: docker exec addons-162403 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:19:01.510599  674168 oci.go:144] the created container "addons-162403" has a running status.
	I0920 18:19:01.510643  674168 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa...
	I0920 18:19:01.839171  674168 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:19:01.868842  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.888555  674168 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:19:01.888581  674168 kic_runner.go:114] Args: [docker exec --privileged addons-162403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:19:01.951628  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:01.969485  674168 machine.go:93] provisionDockerMachine start ...
	I0920 18:19:01.969572  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:01.988650  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:01.988870  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:01.988884  674168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:19:02.122640  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.122671  674168 ubuntu.go:169] provisioning hostname "addons-162403"
	I0920 18:19:02.122731  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.140337  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.140537  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.140557  674168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-162403 && echo "addons-162403" | sudo tee /etc/hostname
	I0920 18:19:02.286561  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-162403
	
	I0920 18:19:02.286650  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.304306  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.304516  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.304533  674168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-162403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-162403/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-162403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:19:02.439353  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:19:02.439404  674168 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-664237/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-664237/.minikube}
	I0920 18:19:02.439441  674168 ubuntu.go:177] setting up certificates
	I0920 18:19:02.439455  674168 provision.go:84] configureAuth start
	I0920 18:19:02.439504  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:02.456858  674168 provision.go:143] copyHostCerts
	I0920 18:19:02.456941  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/ca.pem (1078 bytes)
	I0920 18:19:02.457067  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/cert.pem (1123 bytes)
	I0920 18:19:02.457128  674168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-664237/.minikube/key.pem (1679 bytes)
	I0920 18:19:02.457180  674168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem org=jenkins.addons-162403 san=[127.0.0.1 192.168.49.2 addons-162403 localhost minikube]
	I0920 18:19:02.568617  674168 provision.go:177] copyRemoteCerts
	I0920 18:19:02.568695  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:19:02.568736  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.586920  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:02.684045  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 18:19:02.707472  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:19:02.731956  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:19:02.755601  674168 provision.go:87] duration metric: took 316.131194ms to configureAuth
	I0920 18:19:02.755631  674168 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:19:02.755814  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:02.755914  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:02.772731  674168 main.go:141] libmachine: Using SSH client type: native
	I0920 18:19:02.772918  674168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:19:02.772936  674168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:19:02.992259  674168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:19:02.992298  674168 machine.go:96] duration metric: took 1.022790809s to provisionDockerMachine
	I0920 18:19:02.992310  674168 client.go:171] duration metric: took 11.800496863s to LocalClient.Create
	I0920 18:19:02.992331  674168 start.go:167] duration metric: took 11.800557763s to libmachine.API.Create "addons-162403"
	I0920 18:19:02.992341  674168 start.go:293] postStartSetup for "addons-162403" (driver="docker")
	I0920 18:19:02.992353  674168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:19:02.992454  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:19:02.992503  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.008771  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.104327  674168 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:19:03.107709  674168 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:19:03.107745  674168 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:19:03.107753  674168 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:19:03.107760  674168 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:19:03.107771  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/addons for local assets ...
	I0920 18:19:03.107836  674168 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-664237/.minikube/files for local assets ...
	I0920 18:19:03.107861  674168 start.go:296] duration metric: took 115.514633ms for postStartSetup
	I0920 18:19:03.108152  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.124456  674168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/config.json ...
	I0920 18:19:03.124718  674168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:19:03.124760  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.141718  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.231925  674168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:19:03.236351  674168 start.go:128] duration metric: took 12.046994202s to createHost
	I0920 18:19:03.236388  674168 start.go:83] releasing machines lock for "addons-162403", held for 12.047138719s
	I0920 18:19:03.236447  674168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-162403
	I0920 18:19:03.252823  674168 ssh_runner.go:195] Run: cat /version.json
	I0920 18:19:03.252881  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.252896  674168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:19:03.252965  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:03.270590  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.270812  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:03.431267  674168 ssh_runner.go:195] Run: systemctl --version
	I0920 18:19:03.435427  674168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:19:03.571297  674168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:19:03.575824  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.593925  674168 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:19:03.594008  674168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:19:03.621210  674168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:19:03.621241  674168 start.go:495] detecting cgroup driver to use...
	I0920 18:19:03.621281  674168 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:19:03.621346  674168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:19:03.636176  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:19:03.646720  674168 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:19:03.646780  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:19:03.659269  674168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:19:03.672678  674168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:19:03.753551  674168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:19:03.832924  674168 docker.go:233] disabling docker service ...
	I0920 18:19:03.833033  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:19:03.850932  674168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:19:03.861851  674168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:19:03.936436  674168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:19:04.025605  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:19:04.037271  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:19:04.053234  674168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:19:04.053306  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.062992  674168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:19:04.063067  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.073077  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.082949  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.093166  674168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:19:04.102194  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.111782  674168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:19:04.127237  674168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
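The lines above configure cri-o purely by rewriting `/etc/crio/crio.conf.d/02-crio.conf` in place with `sed`: whole-line replacement for `pause_image` and `cgroup_manager`, delete-then-append for `conmon_cgroup`. A minimal sketch of that same pattern, run against a scratch copy (file path and starting contents are illustrative, not the real drop-in):

```shell
# Sketch: in-place whole-line sed edits in the style minikube uses above,
# against a throwaway copy so it is safe to run anywhere (GNU sed assumed).
conf=/tmp/crio_demo_02-crio.conf
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
EOF
# Replace the whole line regardless of its current value.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any stale conmon_cgroup, then re-append it right after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

The delete-then-append step is what makes the edit idempotent: running it twice still leaves exactly one `conmon_cgroup` line.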
	I0920 18:19:04.137185  674168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:19:04.145365  674168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:19:04.153756  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.227978  674168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:19:04.324503  674168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:19:04.324605  674168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:19:04.328475  674168 start.go:563] Will wait 60s for crictl version
	I0920 18:19:04.328524  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:19:04.331866  674168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:19:04.364842  674168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:19:04.364939  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.404023  674168 ssh_runner.go:195] Run: crio --version
	I0920 18:19:04.442587  674168 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:19:04.444061  674168 cli_runner.go:164] Run: docker network inspect addons-162403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:19:04.460165  674168 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:19:04.463995  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:19:04.474789  674168 kubeadm.go:883] updating cluster {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:19:04.474919  674168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:19:04.474992  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.537318  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.537404  674168 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:19:04.537459  674168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:19:04.571115  674168 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:19:04.571143  674168 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:19:04.571153  674168 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:19:04.571259  674168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-162403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:19:04.571321  674168 ssh_runner.go:195] Run: crio config
	I0920 18:19:04.615201  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:04.615225  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:04.615237  674168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:19:04.615259  674168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-162403 NodeName:addons-162403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:19:04.615389  674168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-162403"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:19:04.615447  674168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:19:04.624504  674168 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:19:04.624568  674168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:19:04.633418  674168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:19:04.650496  674168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:19:04.667763  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
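The `kubeadm.yaml.new` copied above is the multi-document YAML dumped earlier in the log: four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. Listing each document's `kind:` is a cheap sanity check before handing the file to `kubeadm init`; a sketch with a trimmed, illustrative file (not the real one):

```shell
# Sketch: count the documents in a multi-doc kubeadm config by their `kind:`.
f=/tmp/kubeadm_demo.yaml
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$f"
```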
	I0920 18:19:04.684808  674168 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:19:04.688259  674168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
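Both host entries in this log (`host.minikube.internal` earlier, `control-plane.minikube.internal` here) are pinned with the same remove-then-append pattern: filter out any stale line for the name with `grep -v`, append the fresh tab-separated mapping, then copy the result back over `/etc/hosts`. A sketch against a scratch hosts file (paths and starting contents are illustrative):

```shell
# Sketch of minikube's /etc/hosts update pattern, on a throwaway file.
hosts=/tmp/hosts_demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing mapping for the name, then append the desired one.
{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and copying it back (rather than editing in place) matters for the real `/etc/hosts`, which may be a bind mount inside the container.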
	I0920 18:19:04.698716  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:04.772157  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:04.785010  674168 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403 for IP: 192.168.49.2
	I0920 18:19:04.785034  674168 certs.go:194] generating shared ca certs ...
	I0920 18:19:04.785055  674168 certs.go:226] acquiring lock for ca certs: {Name:mk4b124302946da10a6534852cdb170d2c9fff4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.785184  674168 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key
	I0920 18:19:04.975314  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt ...
	I0920 18:19:04.975345  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt: {Name:mk70db283e13139496726ffe72d8d96dde32a822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975559  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key ...
	I0920 18:19:04.975584  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key: {Name:mk35cfb4b8c77a9b5e50fcee25a6045ab52d6653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:04.975700  674168 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key
	I0920 18:19:05.060533  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt ...
	I0920 18:19:05.060567  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt: {Name:mk71caa95e512e49d5f0bbeb9669d49d06067538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060774  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key ...
	I0920 18:19:05.060791  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key: {Name:mk48c17978eac1b6467fd589c3690dfaad357164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.060889  674168 certs.go:256] generating profile certs ...
	I0920 18:19:05.060964  674168 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key
	I0920 18:19:05.060984  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt with IP's: []
	I0920 18:19:05.132709  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt ...
	I0920 18:19:05.132744  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: {Name:mk43ea5dca75753d8d8a5367831467eeceb0fdc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.132939  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key ...
	I0920 18:19:05.132959  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.key: {Name:mk5d83dae2938d299506d1c5f284f55c2b17c66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.133062  674168 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af
	I0920 18:19:05.133090  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:19:05.307926  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af ...
	I0920 18:19:05.307962  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af: {Name:mkae84dcee0d54761655975153f0afe30c8c5174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308152  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af ...
	I0920 18:19:05.308174  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af: {Name:mkf96ba0fb78917c3ee6f7335dc544ffcc5224ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.308277  674168 certs.go:381] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt
	I0920 18:19:05.308379  674168 certs.go:385] copying /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key.bc66e7af -> /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key
	I0920 18:19:05.308461  674168 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key
	I0920 18:19:05.308486  674168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt with IP's: []
	I0920 18:19:05.434100  674168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt ...
	I0920 18:19:05.434142  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt: {Name:mk90e9baf01ada5513109eca2cf59bfe6b10cb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434322  674168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key ...
	I0920 18:19:05.434336  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key: {Name:mk97b476f9ae1a8b6c97412a5ae795e7d133f43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:05.434511  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 18:19:05.434549  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/ca.pem (1078 bytes)
	I0920 18:19:05.434571  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:19:05.434592  674168 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-664237/.minikube/certs/key.pem (1679 bytes)
	I0920 18:19:05.435207  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:19:05.458404  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:19:05.481726  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:19:05.504545  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:19:05.526862  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:19:05.548944  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 18:19:05.571483  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:19:05.593408  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 18:19:05.615754  674168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-664237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:19:05.638295  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:19:05.654802  674168 ssh_runner.go:195] Run: openssl version
	I0920 18:19:05.660087  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:19:05.669718  674168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673149  674168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.673209  674168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:19:05.679642  674168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
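The symlink step above exists because OpenSSL looks up trust-store certificates by a link named `<subject-hash>.0` (here `b5213941.0` for minikubeCA). A sketch of the same hashing-and-linking step using a throwaway self-signed certificate (assumes the `openssl` CLI is installed; all paths are illustrative):

```shell
# Sketch: create the <subject-hash>.0 symlink OpenSSL uses to find a CA cert.
dir=/tmp/certhash_demo
mkdir -p "$dir"
# Throwaway self-signed CA, only for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```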
	I0920 18:19:05.689469  674168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:19:05.692656  674168 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:19:05.692709  674168 kubeadm.go:392] StartCluster: {Name:addons-162403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-162403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:19:05.692807  674168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:19:05.692848  674168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:19:05.726380  674168 cri.go:89] found id: ""
	I0920 18:19:05.726441  674168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:19:05.734945  674168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:19:05.743371  674168 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:19:05.743434  674168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:19:05.751458  674168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:19:05.751486  674168 kubeadm.go:157] found existing configuration files:
	
	I0920 18:19:05.751533  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:19:05.759587  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:19:05.759665  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:19:05.767587  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:19:05.775580  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:19:05.775632  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:19:05.783550  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.791364  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:19:05.791431  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:19:05.799115  674168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:19:05.806872  674168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:19:05.806937  674168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
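The four checks above all follow one rule: any kubeconfig under `/etc/kubernetes` that does not reference `https://control-plane.minikube.internal:8443` is treated as stale and removed so `kubeadm init` can regenerate it (on this first start none of the files exist, so every `grep` fails and every `rm -f` is a no-op). A compact sketch of that loop against a scratch directory (paths and file contents are illustrative):

```shell
# Sketch: delete kubeconfigs that do not point at the expected endpoint.
d=/tmp/kubeconf_demo
mkdir -p "$d"
echo 'server: https://control-plane.minikube.internal:8443' > "$d/admin.conf"
echo 'server: https://old-endpoint:8443' > "$d/kubelet.conf"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Missing or stale file: remove it so kubeadm regenerates a fresh one.
  grep -q 'https://control-plane.minikube.internal:8443' "$d/$f" 2>/dev/null \
    || rm -f "$d/$f"
done
ls "$d"
```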
	I0920 18:19:05.814767  674168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:19:05.849981  674168 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:19:05.850038  674168 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:19:05.866359  674168 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:19:05.866451  674168 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 18:19:05.866546  674168 kubeadm.go:310] OS: Linux
	I0920 18:19:05.866606  674168 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:19:05.866650  674168 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:19:05.866698  674168 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:19:05.866761  674168 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:19:05.866832  674168 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:19:05.866901  674168 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:19:05.866960  674168 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:19:05.867073  674168 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:19:05.867141  674168 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:19:05.916092  674168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:19:05.916231  674168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:19:05.916371  674168 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:19:05.923502  674168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:19:05.926743  674168 out.go:235]   - Generating certificates and keys ...
	I0920 18:19:05.926857  674168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:19:05.926930  674168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:19:06.037108  674168 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:19:06.230359  674168 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:19:06.324616  674168 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:19:06.546085  674168 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:19:06.884456  674168 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:19:06.884577  674168 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.307543  674168 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:19:07.307735  674168 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-162403 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:19:07.569020  674168 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:19:07.702458  674168 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:19:07.850614  674168 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:19:07.850743  674168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:19:07.903971  674168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:19:08.053888  674168 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:19:08.422419  674168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:19:08.545791  674168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:19:08.627541  674168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:19:08.627956  674168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:19:08.631231  674168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:19:08.633449  674168 out.go:235]   - Booting up control plane ...
	I0920 18:19:08.633578  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:19:08.633681  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:19:08.633775  674168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:19:08.645378  674168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:19:08.650587  674168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:19:08.650659  674168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:19:08.727967  674168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:19:08.728106  674168 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:19:09.229492  674168 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.337636ms
	I0920 18:19:09.229658  674168 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:19:13.730791  674168 kubeadm.go:310] [api-check] The API server is healthy after 4.501479968s
	I0920 18:19:13.742809  674168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:19:13.755431  674168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:19:13.774442  674168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:19:13.774707  674168 kubeadm.go:310] [mark-control-plane] Marking the node addons-162403 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:19:13.782319  674168 kubeadm.go:310] [bootstrap-token] Using token: dfp0rr.g8klnxfszt90e7ou
	I0920 18:19:13.783826  674168 out.go:235]   - Configuring RBAC rules ...
	I0920 18:19:13.783941  674168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:19:13.787166  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:19:13.793657  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:19:13.797189  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:19:13.799957  674168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:19:13.802629  674168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:19:14.139197  674168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:19:14.568490  674168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:19:15.136897  674168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:19:15.137714  674168 kubeadm.go:310] 
	I0920 18:19:15.137780  674168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:19:15.137788  674168 kubeadm.go:310] 
	I0920 18:19:15.137863  674168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:19:15.137873  674168 kubeadm.go:310] 
	I0920 18:19:15.137906  674168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:19:15.138010  674168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:19:15.138117  674168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:19:15.138134  674168 kubeadm.go:310] 
	I0920 18:19:15.138208  674168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:19:15.138217  674168 kubeadm.go:310] 
	I0920 18:19:15.138283  674168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:19:15.138292  674168 kubeadm.go:310] 
	I0920 18:19:15.138391  674168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:19:15.138525  674168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:19:15.138624  674168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:19:15.138640  674168 kubeadm.go:310] 
	I0920 18:19:15.138736  674168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:19:15.138857  674168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:19:15.138879  674168 kubeadm.go:310] 
	I0920 18:19:15.139024  674168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139190  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae \
	I0920 18:19:15.139223  674168 kubeadm.go:310] 	--control-plane 
	I0920 18:19:15.139231  674168 kubeadm.go:310] 
	I0920 18:19:15.139332  674168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:19:15.139342  674168 kubeadm.go:310] 
	I0920 18:19:15.139453  674168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dfp0rr.g8klnxfszt90e7ou \
	I0920 18:19:15.139569  674168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:891ba1fd40a1e235f359f18998838e7bbc84a16cf5d5bbb3fe5b65a2c5d30bae 
	I0920 18:19:15.141419  674168 kubeadm.go:310] W0920 18:19:05.847423    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141788  674168 kubeadm.go:310] W0920 18:19:05.848046    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:19:15.141998  674168 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 18:19:15.142142  674168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:19:15.142176  674168 cni.go:84] Creating CNI manager for ""
	I0920 18:19:15.142184  674168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:19:15.144217  674168 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:19:15.145705  674168 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:19:15.149559  674168 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:19:15.149575  674168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:19:15.167148  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:19:15.359568  674168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:19:15.359642  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.359669  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-162403 minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-162403 minikube.k8s.io/primary=true
	I0920 18:19:15.367240  674168 ops.go:34] apiserver oom_adj: -16
	I0920 18:19:15.462349  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:15.963384  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.462821  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:16.962540  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.463154  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:17.962489  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.463105  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:18.962640  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.463445  674168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:19:19.546496  674168 kubeadm.go:1113] duration metric: took 4.186919442s to wait for elevateKubeSystemPrivileges
	I0920 18:19:19.546589  674168 kubeadm.go:394] duration metric: took 13.853885644s to StartCluster
	I0920 18:19:19.546618  674168 settings.go:142] acquiring lock: {Name:mk3858ba4d2318954bc9bdba2ebdd7d07c1af964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.546761  674168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:19:19.547278  674168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-664237/kubeconfig: {Name:mk211a7242c57e0384e62621e3b0b410c7b81ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:19:19.547568  674168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:19:19.547588  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:19:19.547603  674168 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:19:19.547727  674168 addons.go:69] Setting cloud-spanner=true in profile "addons-162403"
	I0920 18:19:19.547739  674168 addons.go:69] Setting yakd=true in profile "addons-162403"
	I0920 18:19:19.547755  674168 addons.go:234] Setting addon cloud-spanner=true in "addons-162403"
	I0920 18:19:19.547765  674168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-162403"
	I0920 18:19:19.547780  674168 addons.go:69] Setting metrics-server=true in profile "addons-162403"
	I0920 18:19:19.547793  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547804  674168 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:19.547813  674168 addons.go:234] Setting addon metrics-server=true in "addons-162403"
	I0920 18:19:19.547819  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.547838  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547843  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547881  674168 addons.go:69] Setting storage-provisioner=true in profile "addons-162403"
	I0920 18:19:19.547898  674168 addons.go:234] Setting addon storage-provisioner=true in "addons-162403"
	I0920 18:19:19.547923  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548013  674168 addons.go:69] Setting ingress=true in profile "addons-162403"
	I0920 18:19:19.548033  674168 addons.go:234] Setting addon ingress=true in "addons-162403"
	I0920 18:19:19.548078  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.548348  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548368  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548372  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548394  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548471  674168 addons.go:69] Setting default-storageclass=true in profile "addons-162403"
	I0920 18:19:19.548500  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-162403"
	I0920 18:19:19.548533  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.548792  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549033  674168 addons.go:69] Setting registry=true in profile "addons-162403"
	I0920 18:19:19.549061  674168 addons.go:234] Setting addon registry=true in "addons-162403"
	I0920 18:19:19.549095  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.547756  674168 addons.go:234] Setting addon yakd=true in "addons-162403"
	I0920 18:19:19.549524  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.549550  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.549933  674168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-162403"
	I0920 18:19:19.549957  674168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-162403"
	I0920 18:19:19.550006  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550197  674168 addons.go:69] Setting ingress-dns=true in profile "addons-162403"
	I0920 18:19:19.550213  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550225  674168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-162403"
	I0920 18:19:19.550238  674168 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-162403"
	I0920 18:19:19.550263  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.551201  674168 addons.go:69] Setting gcp-auth=true in profile "addons-162403"
	I0920 18:19:19.554213  674168 addons.go:69] Setting inspektor-gadget=true in profile "addons-162403"
	I0920 18:19:19.554281  674168 addons.go:69] Setting volcano=true in profile "addons-162403"
	I0920 18:19:19.554302  674168 addons.go:234] Setting addon volcano=true in "addons-162403"
	I0920 18:19:19.551386  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.550214  674168 addons.go:234] Setting addon ingress-dns=true in "addons-162403"
	I0920 18:19:19.554827  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.554304  674168 addons.go:234] Setting addon inspektor-gadget=true in "addons-162403"
	I0920 18:19:19.555122  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.555478  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.555674  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554221  674168 mustload.go:65] Loading cluster: addons-162403
	I0920 18:19:19.554183  674168 out.go:177] * Verifying Kubernetes components...
	I0920 18:19:19.556337  674168 config.go:182] Loaded profile config "addons-162403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:19:19.556799  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.554271  674168 addons.go:69] Setting volumesnapshots=true in profile "addons-162403"
	I0920 18:19:19.557261  674168 addons.go:234] Setting addon volumesnapshots=true in "addons-162403"
	I0920 18:19:19.557308  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.559052  674168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:19:19.569182  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.588210  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588739  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.588904  674168 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:19:19.588992  674168 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:19:19.590309  674168 addons.go:234] Setting addon default-storageclass=true in "addons-162403"
	I0920 18:19:19.590370  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.590786  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:19:19.590802  674168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:19:19.590864  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.590961  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:19.591935  674168 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:19:19.593751  674168 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:19:19.593775  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:19:19.593828  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.601351  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:19:19.601355  674168 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 18:19:19.601442  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.603687  674168 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:19.603717  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:19:19.603786  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.604025  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:19.608296  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:19:19.609371  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:19:19.610117  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:19.610142  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:19:19.610211  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.612872  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:19:19.614205  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:19:19.615649  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:19:19.616930  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:19:19.618228  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:19:19.618357  674168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:19:19.619747  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:19.619771  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:19:19.619845  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.620114  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:19:19.624754  674168 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:19:19.624710  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:19:19.624879  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:19:19.624952  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.628419  674168 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:19:19.628839  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:19:19.628880  674168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:19:19.628974  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.629898  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:19:19.629920  674168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:19:19.629986  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.635925  674168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:19.635951  674168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:19:19.636128  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.638673  674168 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:19:19.638818  674168 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:19:19.641476  674168 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:19:19.641507  674168 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:19:19.641586  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.641902  674168 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:19.641918  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:19:19.641968  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.644063  674168 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:19:19.647042  674168 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:19.647066  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:19:19.647131  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.651090  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.672918  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.673246  674168 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-162403"
	I0920 18:19:19.673285  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:19.673746  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	W0920 18:19:19.674079  674168 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:19:19.680928  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.692356  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.699068  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.703084  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.708959  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.709724  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710034  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.710800  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.716097  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.718252  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.725687  674168 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:19:19.727095  674168 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:19:19.728444  674168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:19.728469  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:19:19.728535  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:19.728936  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.756378  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:19.851667  674168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:19:19.851869  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:19:19.958165  674168 node_ready.go:35] waiting up to 6m0s for node "addons-162403" to be "Ready" ...
	I0920 18:19:20.049122  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:19:20.059225  674168 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:19:20.059328  674168 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:19:20.143656  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:19:20.143697  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:19:20.162533  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:19:20.248915  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:19:20.252979  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:19:20.253073  674168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:19:20.253373  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:19:20.255477  674168 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:19:20.255545  674168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:19:20.344657  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:19:20.344752  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:19:20.344997  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:19:20.347913  674168 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:19:20.347984  674168 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:19:20.361494  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:19:20.361598  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:19:20.443778  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:19:20.460111  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:19:20.460213  674168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:19:20.466113  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:19:20.556027  674168 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.556125  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:19:20.562330  674168 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:19:20.562372  674168 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:19:20.644614  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:19:20.644712  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:19:20.645083  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:19:20.645155  674168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:19:20.743572  674168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:19:20.743665  674168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:19:20.843761  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:19:20.863489  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:19:20.863586  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:19:20.866991  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:19:20.867029  674168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:19:20.957725  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:19:20.957824  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:19:21.051014  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:19:21.051107  674168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:19:21.146711  674168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.146794  674168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:19:21.244660  674168 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:19:21.244769  674168 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:19:21.345912  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:19:21.345949  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:19:21.353497  674168 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.353530  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:19:21.443980  674168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.592066127s)
	I0920 18:19:21.444142  674168 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:19:21.446954  674168 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.447049  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:19:21.451328  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:19:21.556343  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:19:21.567862  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:21.643571  674168 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:19:21.643834  674168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:19:21.857128  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:19:21.857204  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:19:21.970271  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:22.055373  674168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-162403" context rescaled to 1 replicas
	I0920 18:19:22.254875  674168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:19:22.255007  674168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:19:22.351603  674168 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.351644  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:19:22.745266  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:19:22.745357  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:19:22.950177  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:19:22.950262  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:19:22.950772  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:19:22.958386  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.909152791s)
	I0920 18:19:23.143977  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:19:23.144014  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:19:23.344840  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:19:23.344947  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:19:23.463128  674168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.463229  674168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:19:23.654193  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:19:23.862111  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.699531854s)
	I0920 18:19:24.153748  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:25.659918  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.410895641s)
	I0920 18:19:25.659961  674168 addons.go:475] Verifying addon ingress=true in "addons-162403"
	I0920 18:19:25.659999  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.406542284s)
	I0920 18:19:25.660093  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315030192s)
	I0920 18:19:25.660129  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.216231279s)
	I0920 18:19:25.660205  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.193997113s)
	I0920 18:19:25.660276  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.816413561s)
	I0920 18:19:25.660308  674168 addons.go:475] Verifying addon registry=true in "addons-162403"
	I0920 18:19:25.660382  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.208965825s)
	I0920 18:19:25.660442  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.104058168s)
	I0920 18:19:25.660445  674168 addons.go:475] Verifying addon metrics-server=true in "addons-162403"
	I0920 18:19:25.661699  674168 out.go:177] * Verifying registry addon...
	I0920 18:19:25.661755  674168 out.go:177] * Verifying ingress addon...
	I0920 18:19:25.661868  674168 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-162403 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:19:25.663738  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:19:25.664391  674168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0920 18:19:25.668639  674168 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:19:25.668854  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:19:25.668871  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:25.768664  674168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:19:25.768694  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.168189  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.168647  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.244777  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.676860398s)
	W0920 18:19:26.244890  674168 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244939  674168 retry.go:31] will retry after 349.249211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:19:26.244988  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.294091803s)
	I0920 18:19:26.461459  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:26.574707  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.92045562s)
	I0920 18:19:26.574757  674168 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-162403"
	I0920 18:19:26.577367  674168 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:19:26.579563  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:19:26.582943  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:19:26.582960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:26.594681  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:19:26.683334  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:26.683674  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:26.858359  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:19:26.858435  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:26.875902  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:26.984458  674168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:19:27.001107  674168 addons.go:234] Setting addon gcp-auth=true in "addons-162403"
	I0920 18:19:27.001163  674168 host.go:66] Checking if "addons-162403" exists ...
	I0920 18:19:27.001520  674168 cli_runner.go:164] Run: docker container inspect addons-162403 --format={{.State.Status}}
	I0920 18:19:27.018107  674168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:19:27.018153  674168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-162403
	I0920 18:19:27.035342  674168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/addons-162403/id_rsa Username:docker}
	I0920 18:19:27.083631  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.166744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:27.168128  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.646290  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:27.669072  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:27.669418  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.084361  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.166640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.168138  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:28.462238  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:28.583099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:28.667640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:28.667978  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.084266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.167817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.168604  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:29.271367  674168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.676631111s)
	I0920 18:19:29.271432  674168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.253291372s)
	I0920 18:19:29.273273  674168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:19:29.274673  674168 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:19:29.276361  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:19:29.276382  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:19:29.294783  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:19:29.294816  674168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:19:29.345482  674168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.345506  674168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:19:29.363625  674168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:19:29.583445  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:29.667504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:29.668067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.065330  674168 addons.go:475] Verifying addon gcp-auth=true in "addons-162403"
	I0920 18:19:30.067623  674168 out.go:177] * Verifying gcp-auth addon...
	I0920 18:19:30.070321  674168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:19:30.073240  674168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:19:30.073265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.083449  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.167256  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.168040  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.574216  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:30.583194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:30.667733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:30.668045  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:30.961659  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:31.073149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.082855  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.168666  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:31.573991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:31.582620  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:31.667824  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:31.668352  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.073266  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.082897  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.167779  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.168380  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:32.574170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:32.582879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:32.667250  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:32.667809  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.074390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.083130  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:33.168329  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.461572  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:33.574511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:33.582999  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:33.667656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:33.668054  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.073228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.082952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.168374  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.169326  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:34.573898  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:34.583235  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:34.666598  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:34.667851  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.074529  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.083233  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.166658  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.167884  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.573980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:35.582504  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:35.667399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:35.667855  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:35.960967  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:36.073874  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.083242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.166883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.168404  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:36.574240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:36.582733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:36.667467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:36.667953  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.073902  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.082616  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.167641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.167921  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.573766  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:37.583480  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:37.666947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:37.667458  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:37.961890  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:38.073945  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.082640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.167284  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.167840  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:38.574639  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:38.583506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:38.667337  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:38.667789  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.073649  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.084058  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.167781  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.168107  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:39.574163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:39.583050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:39.666763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:39.668155  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.073200  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.082825  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.167592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.168195  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:40.461680  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:40.573622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:40.583124  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:40.666705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:40.667590  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.083878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.167259  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.167696  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:41.573769  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:41.583407  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:41.667187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:41.667621  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.073956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.082469  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.167268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.167773  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.573883  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:42.582802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:42.667181  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:42.667648  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:42.960976  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:43.073526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.083195  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.168076  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:43.574500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:43.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:43.667526  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:43.667955  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.073938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.082232  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.167119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.573757  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:44.583299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:44.666525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:44.668092  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:44.961566  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:45.074296  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.083265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.166731  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:45.573582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:45.583070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:45.666718  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:45.667763  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.074393  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.083026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.167896  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.168469  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:46.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:46.582611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:46.667417  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:46.667835  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.074391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.083342  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.167582  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.168016  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:47.461559  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:47.573674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:47.583550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:47.667101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:47.668093  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.074385  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.083357  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.166820  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.168052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:48.574056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:48.583138  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:48.667700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:48.668170  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.073954  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.082550  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.167253  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.167689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.573924  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:49.582493  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:49.667268  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:49.667713  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:49.961127  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:50.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.082751  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.167446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.167837  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:50.573975  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:50.582446  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:50.667144  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:50.667725  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.073776  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.083555  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.167603  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.168082  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.573207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:51.582872  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:51.667933  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:51.668639  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:51.961792  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:52.073650  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.083774  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.167240  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.167803  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:52.574175  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:52.583088  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:52.667593  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:52.668073  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.074115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.082843  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.167552  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.168250  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:53.574203  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:53.583096  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:53.666775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:53.668043  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.073577  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.083165  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.166822  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.168120  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:54.461639  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:54.573485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:54.583094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:54.667881  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:54.668272  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.074459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.083676  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.168063  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:55.574347  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:55.583185  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:55.666614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:55.668023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.074436  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.083017  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.167739  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.168067  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.574141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:56.582595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:56.667193  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:56.667702  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:56.961306  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:57.073951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.082426  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.167036  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.167619  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:57.574066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:57.582553  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:57.667363  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:57.667862  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.074286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.083053  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.168080  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.168562  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:58.574033  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:58.582834  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:58.667744  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:58.667977  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.074041  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.084503  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.167532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.167866  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:19:59.461351  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:19:59.574055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:19:59.582662  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:19:59.667606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:19:59.668345  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.074001  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.082537  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.167389  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.167781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:00.573646  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:00.583513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:00.667237  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:00.667751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.074614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.083606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.167425  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.167849  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.574159  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:01.582763  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:01.667525  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:01.667967  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:01.961782  674168 node_ready.go:53] node "addons-162403" has status "Ready":"False"
	I0920 18:20:02.073687  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.083273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.167793  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.168126  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:02.573951  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:02.582489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:02.667286  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:02.667673  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.074061  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.083043  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.167741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.168186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.574298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:03.583319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:03.667171  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:03.667926  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:03.963598  674168 node_ready.go:49] node "addons-162403" has status "Ready":"True"
	I0920 18:20:03.963697  674168 node_ready.go:38] duration metric: took 44.005491387s for node "addons-162403" to be "Ready" ...
	I0920 18:20:03.963739  674168 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:20:03.975991  674168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:04.073640  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.083934  674168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:20:04.083964  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.166878  674168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:20:04.166911  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:04.574414  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:04.584293  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:04.668383  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:04.668692  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.077146  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.176605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.176677  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.176971  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.574207  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:05.583569  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:05.668257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:05.668609  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:05.982730  674168 pod_ready.go:93] pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.982753  674168 pod_ready.go:82] duration metric: took 2.006720801s for pod "coredns-7c65d6cfc9-24mgs" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.982772  674168 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987525  674168 pod_ready.go:93] pod "etcd-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.987550  674168 pod_ready.go:82] duration metric: took 4.771792ms for pod "etcd-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.987564  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992095  674168 pod_ready.go:93] pod "kube-apiserver-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.992119  674168 pod_ready.go:82] duration metric: took 4.547516ms for pod "kube-apiserver-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.992133  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996705  674168 pod_ready.go:93] pod "kube-controller-manager-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:05.996728  674168 pod_ready.go:82] duration metric: took 4.58678ms for pod "kube-controller-manager-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:05.996742  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001096  674168 pod_ready.go:93] pod "kube-proxy-dd8cb" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.001119  674168 pod_ready.go:82] duration metric: took 4.367688ms for pod "kube-proxy-dd8cb" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.001128  674168 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.074611  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.084485  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.167894  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.168247  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:06.380446  674168 pod_ready.go:93] pod "kube-scheduler-addons-162403" in "kube-system" namespace has status "Ready":"True"
	I0920 18:20:06.380470  674168 pod_ready.go:82] duration metric: took 379.335122ms for pod "kube-scheduler-addons-162403" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.380483  674168 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:20:06.573654  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:06.583209  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:06.669465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:06.669865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.074546  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.146700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.168630  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.168936  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:07.574572  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:07.646002  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:07.668560  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:07.669087  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.074484  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.147135  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.168492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.169815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:08.387061  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:08.573949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:08.583549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:08.668848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:08.669952  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.075164  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.085141  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.168450  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.168903  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:09.573956  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:09.584733  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:09.668231  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:09.668811  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.074046  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.084317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.167605  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.168539  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.573990  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:10.584073  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:10.668505  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:10.668657  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:10.886466  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:11.074057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.083511  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.168156  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.168499  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:11.574454  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:11.584057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:11.667749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:11.668163  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.074025  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.083478  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.167917  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.168149  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:12.573943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:12.583638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:12.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:12.668188  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.074028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.084332  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:13.167761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.168109  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.385693  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:13.574062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:13.675513  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:13.675988  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:13.676028  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.074341  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.083682  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.167388  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.168157  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:14.574641  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:14.584170  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:14.667163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:14.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.074157  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.083952  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.167738  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.168230  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:15.386551  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:15.573791  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:15.583941  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:15.667622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:15.667966  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.074020  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.083830  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.167948  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.168175  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:16.574271  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:16.583559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:16.668115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:16.668332  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.074273  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.083969  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.167218  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.168238  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.574490  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:17.584137  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:17.667428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:17.667780  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:17.886239  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:18.074428  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.084227  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.167720  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.168760  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:18.574681  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:18.583878  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:18.667539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:18.668689  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.074506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.085322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.167619  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.168781  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.574399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:19.584366  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:19.668321  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:19.669055  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:19.886419  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:20.074661  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.084728  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.170023  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:20.170213  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.574364  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:20.583499  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:20.667708  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:20.668118  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.074066  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.085062  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.167396  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.167749  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:21.573957  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:21.583844  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:21.675451  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:21.675661  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.073998  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.083732  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.169529  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:22.170522  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.386803  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:22.573870  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:22.584705  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:22.667943  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:22.668186  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.074421  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.175976  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.176483  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:23.176697  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.575070  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:23.584072  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:23.667372  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:23.668676  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.074257  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.083644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.168815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.574187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:24.583351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:24.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:24.668620  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:24.886478  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:25.073866  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.084524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.168018  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.168513  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:25.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:25.584539  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:25.667916  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:25.668455  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.074005  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.084351  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.167815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.168130  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:26.573373  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:26.583700  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:26.667912  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:26.668223  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.075963  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.084215  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.167448  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.168236  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:27.385536  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:27.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:27.584026  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:27.667459  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:27.667865  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.074427  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.168099  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.168307  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:28.573283  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:28.583651  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:28.669993  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:28.670558  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.074299  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.083891  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.167292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.168790  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:29.386904  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:29.574248  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:29.584292  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:29.667547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:29.668470  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.073583  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.084840  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.168291  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.168832  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:30.573644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:30.583792  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:30.667979  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:30.668523  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.073798  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.088101  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.167412  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.168798  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.574592  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:31.584104  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:31.676242  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:31.676685  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:31.886012  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:32.074267  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.083949  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.167984  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.168035  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:32.573758  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:32.584399  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:32.667787  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:32.668680  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.073761  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.084622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.168481  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.169015  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.574492  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:33.584349  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:33.668163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:33.668466  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:33.886108  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:34.074298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.090815  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.168228  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.168607  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:34.574304  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:34.583500  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:34.667921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:34.668346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.074222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.083544  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.168115  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.168346  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:35.574453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:35.583475  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:35.668056  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:35.668420  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.074656  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.084839  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.175775  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.176052  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:36.385161  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:36.573863  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:36.583168  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:36.667584  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:36.667932  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.074532  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.084050  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.167729  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.168857  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:37.575013  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:37.584903  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:37.667711  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:37.670115  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.148918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.150092  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.170322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:38.171681  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.449562  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:38.647846  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:38.650638  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:38.671119  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:38.671851  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.073841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.084303  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.168201  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:39.168689  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.574832  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:39.584265  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:39.668057  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:39.668652  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.075222  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.084398  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.169659  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.169875  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.573922  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:40.585047  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:40.667391  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:40.668328  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:40.885859  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:41.074071  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.084506  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.167576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.168542  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:41.574344  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:41.584143  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:41.667456  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:41.669612  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.074595  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.086313  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.167749  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.168802  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:42.574390  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:42.584540  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:42.668039  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:42.668168  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.074796  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.084081  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.175684  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.176316  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:43.387608  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:43.574180  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:43.583921  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:43.668317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:43.668557  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.074438  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.083995  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.175579  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.175990  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:44.574794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:44.584211  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:44.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:44.668012  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.075097  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.083848  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.167219  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.168396  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.574035  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:45.583614  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:45.667959  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:45.668489  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:45.886260  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:46.074149  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.084051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.168119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.168348  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:46.574489  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:46.583340  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:46.667980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:46.668074  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.073991  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.084011  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.167606  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.167975  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.574409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:47.584322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:47.667960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:47.668234  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:47.887147  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:48.074367  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.083559  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.168314  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.168688  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:48.574112  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:48.583378  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:48.667783  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:48.668071  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.074306  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.084220  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.167938  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.168189  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.574906  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:49.583879  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:49.667488  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:49.667893  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:49.887236  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:50.073693  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.084184  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.167541  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.168046  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:50.573701  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:50.584183  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:50.667813  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:50.668089  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.074194  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.083534  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.168108  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.168510  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.574767  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:51.584409  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:51.667685  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:51.668584  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:51.887461  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:52.074272  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.084298  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.167622  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.168343  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:52.574802  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:52.585518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:52.667629  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:52.668294  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.074044  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.085119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.167794  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.167902  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:53.574468  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:53.584721  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:53.668152  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:53.668429  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.074187  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.083549  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.167885  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.168463  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:54.386319  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:54.574862  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:54.584077  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:54.667752  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:20:54.668059  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.074806  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.083967  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.167246  674168 kapi.go:107] duration metric: took 1m29.503507069s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:20:55.168254  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:55.573690  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:55.584989  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:55.669563  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.159319  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.159900  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.244905  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:56.449078  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:56.574644  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:56.584810  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:56.668815  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.151274  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.151865  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.245823  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:57.648547  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:57.650051  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:57.747751  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.147934  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.148674  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.170132  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.573817  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:58.585119  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:58.668821  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:58.886841  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:20:59.074016  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.083075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.169176  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:20:59.573960  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:20:59.586741  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:20:59.669373  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.074322  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.084055  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.168452  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:00.573877  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:00.584075  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:00.669220  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.074453  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.084094  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.169161  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:01.386983  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:01.574518  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:01.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:01.668575  674168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:21:02.074725  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.084554  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:02.169021  674168 kapi.go:107] duration metric: took 1m36.504626828s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:21:02.573607  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:02.584400  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.074502  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.084128  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:03.387306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:03.574624  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:03.583947  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.074010  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.085435  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:04.574841  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:21:04.584904  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.074160  674168 kapi.go:107] duration metric: took 1m35.003835312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:21:05.076015  674168 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-162403 cluster.
	I0920 18:21:05.077316  674168 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:21:05.078763  674168 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:21:05.085221  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:05.387394  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:05.584431  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.084576  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:06.646888  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.085163  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.584837  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:07.887115  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:08.146524  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:08.584317  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.083918  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:09.584467  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.083578  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:10.386767  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:10.585465  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.084980  674168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:21:11.585791  674168 kapi.go:107] duration metric: took 1m45.006228088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:21:11.587570  674168 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0920 18:21:11.588892  674168 addons.go:510] duration metric: took 1m52.041283386s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0920 18:21:12.886529  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:14.886947  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:17.386798  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:19.886426  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:22.387024  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:24.886306  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:26.886543  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:28.887497  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:31.386454  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:33.886042  674168 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"False"
	I0920 18:21:34.886898  674168 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.886922  674168 pod_ready.go:82] duration metric: took 1m28.50643262s for pod "metrics-server-84c5f94fbc-gr2ct" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.886933  674168 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891249  674168 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace has status "Ready":"True"
	I0920 18:21:34.891272  674168 pod_ready.go:82] duration metric: took 4.331899ms for pod "nvidia-device-plugin-daemonset-vkrvk" in "kube-system" namespace to be "Ready" ...
	I0920 18:21:34.891290  674168 pod_ready.go:39] duration metric: took 1m30.927531806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:21:34.891322  674168 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:21:34.891383  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:34.891454  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:34.925385  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:34.925415  674168 cri.go:89] found id: ""
	I0920 18:21:34.925427  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:34.925481  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.928881  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:34.928961  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:34.961773  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:34.961796  674168 cri.go:89] found id: ""
	I0920 18:21:34.961806  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:34.961860  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:34.965452  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:34.965512  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:34.997902  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:34.997922  674168 cri.go:89] found id: ""
	I0920 18:21:34.997930  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:34.997971  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.001467  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:35.001538  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:35.033709  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.033737  674168 cri.go:89] found id: ""
	I0920 18:21:35.033747  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:35.033796  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.037117  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:35.037188  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:35.070146  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.070171  674168 cri.go:89] found id: ""
	I0920 18:21:35.070180  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:35.070232  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.073666  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:35.073742  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:35.106480  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.106505  674168 cri.go:89] found id: ""
	I0920 18:21:35.106515  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:35.106579  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.109930  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:35.110001  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:35.143353  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.143373  674168 cri.go:89] found id: ""
	I0920 18:21:35.143382  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:35.143450  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:35.147158  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:35.147183  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:35.186573  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:35.186608  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:35.219833  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:35.219859  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:35.296767  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:35.296802  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:35.374733  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:35.374783  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:35.397401  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:35.397441  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:35.439718  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:35.439747  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:35.481086  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:35.481119  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:35.515899  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:35.515944  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:35.614907  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:35.614941  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:35.669956  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:35.669994  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:35.705242  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:35.705275  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:38.247127  674168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:21:38.261085  674168 api_server.go:72] duration metric: took 2m18.713476022s to wait for apiserver process to appear ...
	I0920 18:21:38.261112  674168 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:21:38.261153  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:38.261198  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:38.294652  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.294675  674168 cri.go:89] found id: ""
	I0920 18:21:38.294683  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:38.294728  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.297926  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:38.298005  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:38.330857  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.330877  674168 cri.go:89] found id: ""
	I0920 18:21:38.330887  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:38.330948  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.334140  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:38.334194  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:38.367218  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.367245  674168 cri.go:89] found id: ""
	I0920 18:21:38.367252  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:38.367293  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.370531  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:38.370590  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:38.403339  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.403370  674168 cri.go:89] found id: ""
	I0920 18:21:38.403378  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:38.403433  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.406801  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:38.406872  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:38.439882  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.439903  674168 cri.go:89] found id: ""
	I0920 18:21:38.439912  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:38.439969  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.443320  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:38.443402  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:38.476678  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.476703  674168 cri.go:89] found id: ""
	I0920 18:21:38.476712  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:38.476769  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.479997  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:38.480061  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:38.515213  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:38.515238  674168 cri.go:89] found id: ""
	I0920 18:21:38.515246  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:38.515302  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:38.518573  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:38.518593  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:38.574209  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:38.574251  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:38.652350  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:38.652388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:38.674362  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:38.674398  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:38.718009  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:38.718043  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:38.759722  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:38.759754  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:38.796446  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:38.796475  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:38.840305  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:38.840344  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:38.940656  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:38.940691  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:38.974579  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:38.974605  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:39.009360  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:39.009388  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:39.081734  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:39.081781  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:41.622849  674168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:21:41.627422  674168 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:21:41.628424  674168 api_server.go:141] control plane version: v1.31.1
	I0920 18:21:41.628450  674168 api_server.go:131] duration metric: took 3.367330033s to wait for apiserver health ...
	I0920 18:21:41.628460  674168 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:21:41.628488  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:21:41.628545  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:21:41.661458  674168 cri.go:89] found id: "f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:41.661477  674168 cri.go:89] found id: ""
	I0920 18:21:41.661485  674168 logs.go:276] 1 containers: [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5]
	I0920 18:21:41.661531  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.664866  674168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:21:41.664947  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:21:41.699349  674168 cri.go:89] found id: "c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:41.699374  674168 cri.go:89] found id: ""
	I0920 18:21:41.699391  674168 logs.go:276] 1 containers: [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6]
	I0920 18:21:41.699448  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.702834  674168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:21:41.702894  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:21:41.736614  674168 cri.go:89] found id: "cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.736638  674168 cri.go:89] found id: ""
	I0920 18:21:41.736648  674168 logs.go:276] 1 containers: [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629]
	I0920 18:21:41.736696  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.740481  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:21:41.740540  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:21:41.775612  674168 cri.go:89] found id: "249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:41.775636  674168 cri.go:89] found id: ""
	I0920 18:21:41.775644  674168 logs.go:276] 1 containers: [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb]
	I0920 18:21:41.775692  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.779048  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:21:41.779108  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:21:41.811224  674168 cri.go:89] found id: "52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.811253  674168 cri.go:89] found id: ""
	I0920 18:21:41.811261  674168 logs.go:276] 1 containers: [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6]
	I0920 18:21:41.811313  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.814683  674168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:21:41.814756  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:21:41.847730  674168 cri.go:89] found id: "4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:41.847751  674168 cri.go:89] found id: ""
	I0920 18:21:41.847761  674168 logs.go:276] 1 containers: [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284]
	I0920 18:21:41.847811  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.851164  674168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:21:41.851221  674168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:21:41.885935  674168 cri.go:89] found id: "0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:41.885956  674168 cri.go:89] found id: ""
	I0920 18:21:41.885964  674168 logs.go:276] 1 containers: [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d]
	I0920 18:21:41.886013  674168 ssh_runner.go:195] Run: which crictl
	I0920 18:21:41.889575  674168 logs.go:123] Gathering logs for coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] ...
	I0920 18:21:41.889598  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629"
	I0920 18:21:41.924023  674168 logs.go:123] Gathering logs for kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] ...
	I0920 18:21:41.924054  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6"
	I0920 18:21:41.957638  674168 logs.go:123] Gathering logs for kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] ...
	I0920 18:21:41.957665  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284"
	I0920 18:21:42.013803  674168 logs.go:123] Gathering logs for kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] ...
	I0920 18:21:42.013840  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d"
	I0920 18:21:42.052343  674168 logs.go:123] Gathering logs for kubelet ...
	I0920 18:21:42.052375  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:21:42.135981  674168 logs.go:123] Gathering logs for dmesg ...
	I0920 18:21:42.136020  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:21:42.164238  674168 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:21:42.164272  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:21:42.365506  674168 logs.go:123] Gathering logs for kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] ...
	I0920 18:21:42.365547  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5"
	I0920 18:21:42.460595  674168 logs.go:123] Gathering logs for etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] ...
	I0920 18:21:42.460631  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6"
	I0920 18:21:42.502829  674168 logs.go:123] Gathering logs for kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] ...
	I0920 18:21:42.502868  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb"
	I0920 18:21:42.557032  674168 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:21:42.557069  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:21:42.629398  674168 logs.go:123] Gathering logs for container status ...
	I0920 18:21:42.629442  674168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:21:45.182962  674168 system_pods.go:59] 18 kube-system pods found
	I0920 18:21:45.183040  674168 system_pods.go:61] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.183051  674168 system_pods.go:61] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.183057  674168 system_pods.go:61] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.183062  674168 system_pods.go:61] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.183069  674168 system_pods.go:61] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.183078  674168 system_pods.go:61] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.183085  674168 system_pods.go:61] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.183094  674168 system_pods.go:61] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.183101  674168 system_pods.go:61] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.183110  674168 system_pods.go:61] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.183116  674168 system_pods.go:61] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.183122  674168 system_pods.go:61] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.183129  674168 system_pods.go:61] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.183137  674168 system_pods.go:61] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.183144  674168 system_pods.go:61] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.183152  674168 system_pods.go:61] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.183158  674168 system_pods.go:61] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.183165  674168 system_pods.go:61] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.183175  674168 system_pods.go:74] duration metric: took 3.554706193s to wait for pod list to return data ...
	I0920 18:21:45.183191  674168 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:21:45.185616  674168 default_sa.go:45] found service account: "default"
	I0920 18:21:45.185637  674168 default_sa.go:55] duration metric: took 2.436616ms for default service account to be created ...
	I0920 18:21:45.185645  674168 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:21:45.193659  674168 system_pods.go:86] 18 kube-system pods found
	I0920 18:21:45.193684  674168 system_pods.go:89] "coredns-7c65d6cfc9-24mgs" [ec3e74ab-0ca2-4944-a0ba-ab3e2e552a1f] Running
	I0920 18:21:45.193693  674168 system_pods.go:89] "csi-hostpath-attacher-0" [057910a4-ea07-40ab-9129-a3c79903a5f9] Running
	I0920 18:21:45.193697  674168 system_pods.go:89] "csi-hostpath-resizer-0" [2a6e070f-8f67-46ad-8e2e-e738b9224362] Running
	I0920 18:21:45.193700  674168 system_pods.go:89] "csi-hostpathplugin-hgq4x" [d78d2043-38be-4774-a4e1-8f366b694e3f] Running
	I0920 18:21:45.193704  674168 system_pods.go:89] "etcd-addons-162403" [cd967cd6-498a-436c-8ebf-10e541085240] Running
	I0920 18:21:45.193708  674168 system_pods.go:89] "kindnet-j7fr4" [300d7753-4ee6-44db-818d-fdb1f602488b] Running
	I0920 18:21:45.193712  674168 system_pods.go:89] "kube-apiserver-addons-162403" [057055c6-3f96-4763-b006-b61092360aef] Running
	I0920 18:21:45.193715  674168 system_pods.go:89] "kube-controller-manager-addons-162403" [84fb95f0-0529-4bd3-8dd5-457189ef56cc] Running
	I0920 18:21:45.193719  674168 system_pods.go:89] "kube-ingress-dns-minikube" [254c4909-b4eb-4c2a-9eaa-90c7d14bd7e5] Running
	I0920 18:21:45.193723  674168 system_pods.go:89] "kube-proxy-dd8cb" [3cac319c-9057-4e29-ae2c-fb7870227b4b] Running
	I0920 18:21:45.193726  674168 system_pods.go:89] "kube-scheduler-addons-162403" [aa393b8a-49e5-4aba-bb03-3843d62ed2d2] Running
	I0920 18:21:45.193730  674168 system_pods.go:89] "metrics-server-84c5f94fbc-gr2ct" [aadc0160-94e3-4273-9d42-d0552af7ad61] Running
	I0920 18:21:45.193733  674168 system_pods.go:89] "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
	I0920 18:21:45.193737  674168 system_pods.go:89] "registry-66c9cd494c-b4j85" [88d02c55-38b5-4e2b-9986-5f7887226e63] Running
	I0920 18:21:45.193741  674168 system_pods.go:89] "registry-proxy-x8xl5" [22fc174a-6a59-45df-b8e0-fd97f697901c] Running
	I0920 18:21:45.193744  674168 system_pods.go:89] "snapshot-controller-56fcc65765-pdqqq" [a8386b62-336b-4071-af36-a2737b7f6933] Running
	I0920 18:21:45.193749  674168 system_pods.go:89] "snapshot-controller-56fcc65765-qx6cd" [369755b7-0a45-437e-93c3-c52c7bc63bfd] Running
	I0920 18:21:45.193755  674168 system_pods.go:89] "storage-provisioner" [f20bb24d-0c61-4464-93b8-2f32abbe2465] Running
	I0920 18:21:45.193761  674168 system_pods.go:126] duration metric: took 8.110899ms to wait for k8s-apps to be running ...
	I0920 18:21:45.193769  674168 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:21:45.193838  674168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:21:45.204913  674168 system_svc.go:56] duration metric: took 11.134209ms WaitForService to wait for kubelet
	I0920 18:21:45.204952  674168 kubeadm.go:582] duration metric: took 2m25.657338244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:21:45.204980  674168 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:21:45.208110  674168 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 18:21:45.208138  674168 node_conditions.go:123] node cpu capacity is 8
	I0920 18:21:45.208151  674168 node_conditions.go:105] duration metric: took 3.164779ms to run NodePressure ...
	I0920 18:21:45.208162  674168 start.go:241] waiting for startup goroutines ...
	I0920 18:21:45.208172  674168 start.go:246] waiting for cluster config update ...
	I0920 18:21:45.208187  674168 start.go:255] writing updated cluster config ...
	I0920 18:21:45.208459  674168 ssh_runner.go:195] Run: rm -f paused
	I0920 18:21:45.256980  674168 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:21:45.259386  674168 out.go:177] * Done! kubectl is now configured to use "addons-162403" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.661251938Z" level=info msg="Removing pod sandbox: fe30db564565588d29fe729b2c97843db92d783ddec45aad76974cae6c4f1e21" id=d57e2a34-f107-4464-8de2-4797fdf098ff name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.667527173Z" level=info msg="Removed pod sandbox: fe30db564565588d29fe729b2c97843db92d783ddec45aad76974cae6c4f1e21" id=d57e2a34-f107-4464-8de2-4797fdf098ff name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.667830548Z" level=info msg="Stopping pod sandbox: 677f6adbe7c567af34cff7f9acadd07a72c08073d6009357db29069c8504024d" id=16e8aa7b-8fe0-480f-b8bb-5c2a90124c46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.667865315Z" level=info msg="Stopped pod sandbox (already stopped): 677f6adbe7c567af34cff7f9acadd07a72c08073d6009357db29069c8504024d" id=16e8aa7b-8fe0-480f-b8bb-5c2a90124c46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.668120352Z" level=info msg="Removing pod sandbox: 677f6adbe7c567af34cff7f9acadd07a72c08073d6009357db29069c8504024d" id=0e880a5c-b227-40ae-a6ce-cbd6d3acaff1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.674721944Z" level=info msg="Removed pod sandbox: 677f6adbe7c567af34cff7f9acadd07a72c08073d6009357db29069c8504024d" id=0e880a5c-b227-40ae-a6ce-cbd6d3acaff1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.675107295Z" level=info msg="Stopping pod sandbox: 13a96ce846052d641e65c4abf35ce356a267f32e039a8d92eff0c1eebf378854" id=dac248db-92d7-4a8a-bf93-749ebf4a0fa1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.675144615Z" level=info msg="Stopped pod sandbox (already stopped): 13a96ce846052d641e65c4abf35ce356a267f32e039a8d92eff0c1eebf378854" id=dac248db-92d7-4a8a-bf93-749ebf4a0fa1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.675450377Z" level=info msg="Removing pod sandbox: 13a96ce846052d641e65c4abf35ce356a267f32e039a8d92eff0c1eebf378854" id=9bc2400a-1f6c-48ae-b4aa-6f8e9aa7bf17 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:14 addons-162403 crio[1027]: time="2024-09-20 18:33:14.680780993Z" level=info msg="Removed pod sandbox: 13a96ce846052d641e65c4abf35ce356a267f32e039a8d92eff0c1eebf378854" id=9bc2400a-1f6c-48ae-b4aa-6f8e9aa7bf17 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 18:33:19 addons-162403 crio[1027]: time="2024-09-20 18:33:19.446529746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f813c7dc-1f8b-4cfb-8412-44479a2dfe92 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:19 addons-162403 crio[1027]: time="2024-09-20 18:33:19.446911777Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f813c7dc-1f8b-4cfb-8412-44479a2dfe92 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:32 addons-162403 crio[1027]: time="2024-09-20 18:33:32.446557222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=13ad03f0-a94c-4d8e-9a8b-be26d63bb9da name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:32 addons-162403 crio[1027]: time="2024-09-20 18:33:32.446863737Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=13ad03f0-a94c-4d8e-9a8b-be26d63bb9da name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:44 addons-162403 crio[1027]: time="2024-09-20 18:33:44.446419207Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fb583e86-1f5b-4096-8739-00303ad9f454 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:44 addons-162403 crio[1027]: time="2024-09-20 18:33:44.446699480Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fb583e86-1f5b-4096-8739-00303ad9f454 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:57 addons-162403 crio[1027]: time="2024-09-20 18:33:57.446716531Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed2c5db8-9e85-44fd-98f0-1b7845026e7c name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:33:57 addons-162403 crio[1027]: time="2024-09-20 18:33:57.446924985Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ed2c5db8-9e85-44fd-98f0-1b7845026e7c name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:09 addons-162403 crio[1027]: time="2024-09-20 18:34:09.446424546Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9d8653ad-06a5-4470-963f-054debfcb556 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:09 addons-162403 crio[1027]: time="2024-09-20 18:34:09.446685778Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9d8653ad-06a5-4470-963f-054debfcb556 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:22 addons-162403 crio[1027]: time="2024-09-20 18:34:22.446467790Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3bf9ee8f-8ef2-42a4-bd63-2a7becff59b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:22 addons-162403 crio[1027]: time="2024-09-20 18:34:22.446763936Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3bf9ee8f-8ef2-42a4-bd63-2a7becff59b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:35 addons-162403 crio[1027]: time="2024-09-20 18:34:35.446058785Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d6d9ebb-5a6f-454d-8832-544b7b9901fa name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:35 addons-162403 crio[1027]: time="2024-09-20 18:34:35.446293226Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7d6d9ebb-5a6f-454d-8832-544b7b9901fa name=/runtime.v1.ImageService/ImageStatus
	Sep 20 18:34:47 addons-162403 crio[1027]: time="2024-09-20 18:34:47.911732751Z" level=info msg="Stopping container: acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720 (timeout: 30s)" id=450dd214-a919-49e5-8cdc-43923baf0bd5 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	83a739692c73d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   b71637a3e61ec       hello-world-app-55bf9c44b4-tvlhv
	c25d2267ccfdd       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   6293a840e0f65       nginx
	167a7699d2ad7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago      Running             gcp-auth                  0                   cff5d95699f6e       gcp-auth-89d5ffd79-742xn
	5d495cc4d007d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago      Running             local-path-provisioner    0                   2a8ec889be7b5       local-path-provisioner-86d989889c-v5k84
	acca616b5cd64       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   6a8890c1b1e3b       metrics-server-84c5f94fbc-gr2ct
	cdb59912f2e14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago      Running             coredns                   0                   10529a41c309c       coredns-7c65d6cfc9-24mgs
	525f045aa748e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   612ce81908c78       storage-provisioner
	0a3bc23a91121       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                        15 minutes ago      Running             kindnet-cni               0                   7f6e1d53fda98       kindnet-j7fr4
	52c52923ef8ea       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   ae303bad1ebff       kube-proxy-dd8cb
	4b71192f65f2d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   2a41178034cbc       kube-controller-manager-addons-162403
	249ac20417667       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   e48f7866753bd       kube-scheduler-addons-162403
	c4ad43014a83b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   bdef69edf9acd       etcd-addons-162403
	f38c04f167d00       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   c3f039afa24e9       kube-apiserver-addons-162403
	
	
	==> coredns [cdb59912f2e1480afebe02cdad2def98d0fc5c98298f195576dd3307877d9629] <==
	[INFO] 10.244.0.18:52396 - 19852 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128047s
	[INFO] 10.244.0.18:44347 - 60145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068286s
	[INFO] 10.244.0.18:44347 - 17143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094705s
	[INFO] 10.244.0.18:46410 - 18873 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.005052499s
	[INFO] 10.244.0.18:46410 - 26037 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.007222025s
	[INFO] 10.244.0.18:34432 - 34096 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00440361s
	[INFO] 10.244.0.18:34432 - 33069 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006320363s
	[INFO] 10.244.0.18:48014 - 36175 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004973376s
	[INFO] 10.244.0.18:48014 - 51266 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006232888s
	[INFO] 10.244.0.18:55384 - 9190 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082322s
	[INFO] 10.244.0.18:55384 - 6628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129501s
	[INFO] 10.244.0.20:48448 - 47225 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223503s
	[INFO] 10.244.0.20:55693 - 31699 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271037s
	[INFO] 10.244.0.20:57762 - 4868 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147825s
	[INFO] 10.244.0.20:41977 - 42962 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138482s
	[INFO] 10.244.0.20:35780 - 25623 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090618s
	[INFO] 10.244.0.20:35231 - 28557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160324s
	[INFO] 10.244.0.20:37823 - 1338 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005223073s
	[INFO] 10.244.0.20:35707 - 7420 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00534569s
	[INFO] 10.244.0.20:59126 - 24034 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005715821s
	[INFO] 10.244.0.20:41947 - 25595 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006074152s
	[INFO] 10.244.0.20:60551 - 48110 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004720674s
	[INFO] 10.244.0.20:47355 - 8992 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005126856s
	[INFO] 10.244.0.20:41941 - 3315 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002301451s
	[INFO] 10.244.0.20:35273 - 35195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002359301s
	
	
	==> describe nodes <==
	Name:               addons-162403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-162403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-162403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_19_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-162403
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:19:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-162403
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:32:51 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:32:51 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:32:51 +0000   Fri, 20 Sep 2024 18:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:32:51 +0000   Fri, 20 Sep 2024 18:20:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-162403
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 84fc0251f2cc47d9b8eafd449e71e23a
	  System UUID:                a1b78626-3ab2-4437-8dfa-b9488af04241
	  Boot ID:                    1090cbe7-7e52-40cc-b00d-227cb699fd1e
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-tvlhv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  gcp-auth                    gcp-auth-89d5ffd79-742xn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-24mgs                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-addons-162403                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-j7fr4                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-162403               250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-162403      200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dd8cb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-162403               100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-gr2ct            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-v5k84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-162403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-162403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-162403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-162403 event: Registered Node addons-162403 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-162403 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 16 6d e1 19 46 08 06
	[  +6.907947] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 0c fb 31 c2 61 08 06
	[ +27.701132] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 51 e2 82 fa 23 08 06
	[  +0.958821] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 b8 e7 f5 d7 b1 08 06
	[  +0.036400] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a e8 33 86 c0 c3 08 06
	[Sep20 18:07] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 77 f7 48 11 3e 08 06
	[Sep20 18:30] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +1.015314] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +2.011792] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +4.255527] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[  +8.195086] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[ +16.122214] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	[Sep20 18:31] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 4a 85 8e a6 75 f5 b2 32 09 fe 04 5a 08 00
	
	
	==> etcd [c4ad43014a83bd695658fea9ff9e19cea193dfe2a46883a2dbe0413829b803b6] <==
	{"level":"info","ts":"2024-09-20T18:19:21.552216Z","caller":"traceutil/trace.go:171","msg":"trace[1547814754] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"105.546848ms","start":"2024-09-20T18:19:21.446660Z","end":"2024-09-20T18:19:21.552207Z","steps":["trace[1547814754] 'agreement among raft nodes before linearized reading'  (duration: 105.471396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.552378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.065959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-20T18:19:21.554478Z","caller":"traceutil/trace.go:171","msg":"trace[813399487] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:394; }","duration":"110.162094ms","start":"2024-09-20T18:19:21.444302Z","end":"2024-09-20T18:19:21.554464Z","steps":["trace[813399487] 'agreement among raft nodes before linearized reading'  (duration: 108.03856ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.961265Z","caller":"traceutil/trace.go:171","msg":"trace[125269741] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:411; }","duration":"106.622713ms","start":"2024-09-20T18:19:21.854600Z","end":"2024-09-20T18:19:21.961223Z","steps":["trace[125269741] 'read index received'  (duration: 10.382865ms)","trace[125269741] 'applied index is now lower than readState.Index'  (duration: 96.239015ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:21.962257Z","caller":"traceutil/trace.go:171","msg":"trace[622144423] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"105.453568ms","start":"2024-09-20T18:19:21.856784Z","end":"2024-09-20T18:19:21.962238Z","steps":["trace[622144423] 'process raft request'  (duration: 104.328353ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.962484Z","caller":"traceutil/trace.go:171","msg":"trace[1676311620] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"107.945568ms","start":"2024-09-20T18:19:21.854521Z","end":"2024-09-20T18:19:21.962467Z","steps":["trace[1676311620] 'process raft request'  (duration: 106.387415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.476893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:21.964893Z","caller":"traceutil/trace.go:171","msg":"trace[911468214] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:409; }","duration":"110.230982ms","start":"2024-09-20T18:19:21.854646Z","end":"2024-09-20T18:19:21.964877Z","steps":["trace[911468214] 'agreement among raft nodes before linearized reading'  (duration: 108.318206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:19:21.963644Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.037025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-66scz\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-09-20T18:19:21.961930Z","caller":"traceutil/trace.go:171","msg":"trace[1847307825] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"107.308099ms","start":"2024-09-20T18:19:21.854610Z","end":"2024-09-20T18:19:21.961918Z","steps":["trace[1847307825] 'process raft request'  (duration: 106.453351ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:21.965508Z","caller":"traceutil/trace.go:171","msg":"trace[534112644] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-66scz; range_end:; response_count:1; response_revision:409; }","duration":"110.908607ms","start":"2024-09-20T18:19:21.854588Z","end":"2024-09-20T18:19:21.965497Z","steps":["trace[534112644] 'agreement among raft nodes before linearized reading'  (duration: 109.014234ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.259610Z","caller":"traceutil/trace.go:171","msg":"trace[515300591] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"197.066865ms","start":"2024-09-20T18:19:22.062522Z","end":"2024-09-20T18:19:22.259589Z","steps":["trace[515300591] 'process raft request'  (duration: 97.886637ms)","trace[515300591] 'compare'  (duration: 98.78174ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:19:22.259756Z","caller":"traceutil/trace.go:171","msg":"trace[414203013] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"196.971979ms","start":"2024-09-20T18:19:22.062775Z","end":"2024-09-20T18:19:22.259747Z","steps":["trace[414203013] 'read index received'  (duration: 84.675819ms)","trace[414203013] 'applied index is now lower than readState.Index'  (duration: 112.295168ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:19:22.259853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.062034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T18:19:22.259884Z","caller":"traceutil/trace.go:171","msg":"trace[1096776429] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:413; }","duration":"197.105208ms","start":"2024-09-20T18:19:22.062771Z","end":"2024-09-20T18:19:22.259876Z","steps":["trace[1096776429] 'agreement among raft nodes before linearized reading'  (duration: 197.01208ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:22.260069Z","caller":"traceutil/trace.go:171","msg":"trace[1236995037] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"108.527632ms","start":"2024-09-20T18:19:22.151533Z","end":"2024-09-20T18:19:22.260061Z","steps":["trace[1236995037] 'process raft request'  (duration: 107.765823ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:19:23.355227Z","caller":"traceutil/trace.go:171","msg":"trace[850183716] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"102.197243ms","start":"2024-09-20T18:19:23.253005Z","end":"2024-09-20T18:19:23.355202Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:19:23.355525Z","caller":"traceutil/trace.go:171","msg":"trace[673964805] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"102.739687ms","start":"2024-09-20T18:19:23.252776Z","end":"2024-09-20T18:19:23.355515Z","steps":[],"step_count":0}
	{"level":"info","ts":"2024-09-20T18:20:56.075302Z","caller":"traceutil/trace.go:171","msg":"trace[1129311574] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"107.719546ms","start":"2024-09-20T18:20:55.967566Z","end":"2024-09-20T18:20:56.075286Z","steps":["trace[1129311574] 'process raft request'  (duration: 107.623877ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:29:10.697461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-20T18:29:10.721647Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"23.706571ms","hash":2292866617,"current-db-size-bytes":6184960,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3268608,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T18:29:10.721703Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2292866617,"revision":1534,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T18:34:10.702633Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1954}
	{"level":"info","ts":"2024-09-20T18:34:10.718708Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1954,"took":"15.539104ms","hash":442979265,"current-db-size-bytes":6184960,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4608000,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-20T18:34:10.718754Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":442979265,"revision":1954,"compact-revision":1534}
	
	
	==> gcp-auth [167a7699d2ad79a24795ae8d77140ef7ac5625e2824cc3968217f95fcb44cb62] <==
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:21:45 Ready to marshal response ...
	2024/09/20 18:21:45 Ready to write response ...
	2024/09/20 18:29:56 Ready to marshal response ...
	2024/09/20 18:29:56 Ready to write response ...
	2024/09/20 18:29:58 Ready to marshal response ...
	2024/09/20 18:29:58 Ready to write response ...
	2024/09/20 18:30:12 Ready to marshal response ...
	2024/09/20 18:30:12 Ready to write response ...
	2024/09/20 18:30:22 Ready to marshal response ...
	2024/09/20 18:30:22 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:44 Ready to marshal response ...
	2024/09/20 18:30:44 Ready to write response ...
	2024/09/20 18:30:56 Ready to marshal response ...
	2024/09/20 18:30:56 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:30:57 Ready to marshal response ...
	2024/09/20 18:30:57 Ready to write response ...
	2024/09/20 18:32:33 Ready to marshal response ...
	2024/09/20 18:32:33 Ready to write response ...
	
	
	==> kernel <==
	 18:34:49 up  2:17,  0 users,  load average: 0.55, 0.43, 0.84
	Linux addons-162403 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0a3bc23a91121ba80e4fb2bf2225373da2f40853ef60f395c1b7cc42335ba90d] <==
	I0920 18:32:43.644698       1 main.go:299] handling current node
	I0920 18:32:53.644854       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:32:53.644890       1 main.go:299] handling current node
	I0920 18:33:03.649654       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:03.649704       1 main.go:299] handling current node
	I0920 18:33:13.644741       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:13.644794       1 main.go:299] handling current node
	I0920 18:33:23.645090       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:23.645125       1 main.go:299] handling current node
	I0920 18:33:33.651225       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:33.651263       1 main.go:299] handling current node
	I0920 18:33:43.653338       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:43.653391       1 main.go:299] handling current node
	I0920 18:33:53.644247       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:33:53.644291       1 main.go:299] handling current node
	I0920 18:34:03.647088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:34:03.647124       1 main.go:299] handling current node
	I0920 18:34:13.645102       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:34:13.645135       1 main.go:299] handling current node
	I0920 18:34:23.644431       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:34:23.644465       1 main.go:299] handling current node
	I0920 18:34:33.646216       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:34:33.646264       1 main.go:299] handling current node
	I0920 18:34:43.644404       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 18:34:43.644444       1 main.go:299] handling current node
	
	
	==> kube-apiserver [f38c04f167d002d7eaa0d6babf655173dc59197a2fdcb23e2deaf6c3ee073bc5] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:21:34.561794       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.35.12:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.35.12:443: connect: connection refused" logger="UnhandledError"
	I0920 18:21:34.599060       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 18:30:06.424489       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:30:07.441395       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:30:09.437271       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:11.895465       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:30:12.151933       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.87.74"}
	I0920 18:30:37.871661       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.871723       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.887033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.887175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.892360       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.892420       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.898321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.898486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:30:37.949118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:30:37.949160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 18:30:38.893058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 18:30:38.950022       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 18:30:38.957732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 18:30:57.308382       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.229.203"}
	I0920 18:32:33.948290       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.42.120"}
	
	
	==> kube-controller-manager [4b71192f65f2de6df3616ddc8a6620b0c14723ea4d2ff122cde6d87305d3c284] <==
	I0920 18:32:45.504178       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0920 18:32:51.366869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-162403"
	W0920 18:32:56.435000       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:32:56.435047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:02.892652       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:02.892695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:14.040221       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:14.040263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:15.876928       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:15.876974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:52.138147       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:52.138196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:53.913191       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:53.913239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:33:54.270398       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:33:54.270443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:34:10.259904       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:34:10.259954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:34:22.682413       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:34:22.682459       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:34:26.138179       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:34:26.138226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:34:47.198204       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:34:47.198252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:34:47.902315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="8.497µs"
	
	
	==> kube-proxy [52c52923ef8ea0fe574b0abbae3590d5fdd4acc2ef2282c98e442504626a5fe6] <==
	I0920 18:19:23.348890       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:19:24.053513       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:19:24.053686       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:19:24.461765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:19:24.461911       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:19:24.544987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:19:24.545696       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:19:24.545778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:19:24.547965       1 config.go:199] "Starting service config controller"
	I0920 18:19:24.549760       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:19:24.549176       1 config.go:328] "Starting node config controller"
	I0920 18:19:24.549328       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:19:24.549802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:19:24.549809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for node config
	I0920 18:19:24.651660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:19:24.651702       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [249ac20417667d2249afd8ff66cf57fb6cf92d53e82089ed0ecb0e7fc1a31feb] <==
	W0920 18:19:12.051578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0920 18:19:12.051599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:12.051631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:19:12.051630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 18:19:12.052678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.052829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 18:19:12.052843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.855801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.855855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.864661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:19:12.864714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:19:12.882432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:19:12.882477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.910952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:12.911024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:12.925403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 18:19:12.925449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.010499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 18:19:13.010542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.081617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.081680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:19:13.166464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:19:13.166510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 18:19:15.650022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:34:04 addons-162403 kubelet[1624]: E0920 18:34:04.731131    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857244730843173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:04 addons-162403 kubelet[1624]: E0920 18:34:04.731172    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857244730843173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:09 addons-162403 kubelet[1624]: E0920 18:34:09.447003    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3994a86e-6df2-4cd1-b7ae-47433e7d9eef"
	Sep 20 18:34:14 addons-162403 kubelet[1624]: E0920 18:34:14.473850    1624 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7, memory: /docker/106a9fd3effc28dc1b06b314d32589d00ce87bf354d54e5aa36fc020c898a4b7/system.slice/kubelet.service"
	Sep 20 18:34:14 addons-162403 kubelet[1624]: E0920 18:34:14.733790    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857254733468595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:14 addons-162403 kubelet[1624]: E0920 18:34:14.733833    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857254733468595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:22 addons-162403 kubelet[1624]: E0920 18:34:22.447046    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3994a86e-6df2-4cd1-b7ae-47433e7d9eef"
	Sep 20 18:34:24 addons-162403 kubelet[1624]: E0920 18:34:24.736534    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857264736282934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:24 addons-162403 kubelet[1624]: E0920 18:34:24.736570    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857264736282934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:34 addons-162403 kubelet[1624]: E0920 18:34:34.739157    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857274738893835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:34 addons-162403 kubelet[1624]: E0920 18:34:34.739194    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857274738893835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:35 addons-162403 kubelet[1624]: E0920 18:34:35.446534    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3994a86e-6df2-4cd1-b7ae-47433e7d9eef"
	Sep 20 18:34:44 addons-162403 kubelet[1624]: E0920 18:34:44.742676    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857284742350936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:44 addons-162403 kubelet[1624]: E0920 18:34:44.742723    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726857284742350936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570253,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.131643    1624 scope.go:117] "RemoveContainer" containerID="acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720"
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.155942    1624 scope.go:117] "RemoveContainer" containerID="acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720"
	Sep 20 18:34:49 addons-162403 kubelet[1624]: E0920 18:34:49.156373    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720\": container with ID starting with acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720 not found: ID does not exist" containerID="acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720"
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.156425    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720"} err="failed to get container status \"acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720\": rpc error: code = NotFound desc = could not find container \"acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720\": container with ID starting with acca616b5cd64d20eb89d02435c825d70fdc8194125bdd7f4ecbd07e63107720 not found: ID does not exist"
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.318186    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfh7k\" (UniqueName: \"kubernetes.io/projected/aadc0160-94e3-4273-9d42-d0552af7ad61-kube-api-access-mfh7k\") pod \"aadc0160-94e3-4273-9d42-d0552af7ad61\" (UID: \"aadc0160-94e3-4273-9d42-d0552af7ad61\") "
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.318240    1624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aadc0160-94e3-4273-9d42-d0552af7ad61-tmp-dir\") pod \"aadc0160-94e3-4273-9d42-d0552af7ad61\" (UID: \"aadc0160-94e3-4273-9d42-d0552af7ad61\") "
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.318597    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aadc0160-94e3-4273-9d42-d0552af7ad61-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "aadc0160-94e3-4273-9d42-d0552af7ad61" (UID: "aadc0160-94e3-4273-9d42-d0552af7ad61"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.320321    1624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aadc0160-94e3-4273-9d42-d0552af7ad61-kube-api-access-mfh7k" (OuterVolumeSpecName: "kube-api-access-mfh7k") pod "aadc0160-94e3-4273-9d42-d0552af7ad61" (UID: "aadc0160-94e3-4273-9d42-d0552af7ad61"). InnerVolumeSpecName "kube-api-access-mfh7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.419317    1624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mfh7k\" (UniqueName: \"kubernetes.io/projected/aadc0160-94e3-4273-9d42-d0552af7ad61-kube-api-access-mfh7k\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:34:49 addons-162403 kubelet[1624]: I0920 18:34:49.419364    1624 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aadc0160-94e3-4273-9d42-d0552af7ad61-tmp-dir\") on node \"addons-162403\" DevicePath \"\""
	Sep 20 18:34:49 addons-162403 kubelet[1624]: E0920 18:34:49.447388    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3994a86e-6df2-4cd1-b7ae-47433e7d9eef"
	
	
	==> storage-provisioner [525f045aa748e6ea6058a19f28604c5472b307505ab4e997fc5024dd5e9d9ef2] <==
	I0920 18:20:05.073668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:20:05.085517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:20:05.085586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:20:05.094317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:20:05.094479       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	I0920 18:20:05.094902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a6c3edb-f643-4302-b044-b3279df05602", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06 became leader
	I0920 18:20:05.195504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-162403_d7c3519d-575e-4dbc-aeb4-229d05571c06!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-162403 -n addons-162403
helpers_test.go:261: (dbg) Run:  kubectl --context addons-162403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-162403 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-162403 describe pod busybox:
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-162403/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:21:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hs2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4hs2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/busybox to addons-162403
	  Normal   Pulling    11m (x4 over 13m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)    kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m2s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (296.53s)


Test pass (299/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 12.96
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.06
21 TestBinaryMirror 0.76
22 TestOffline 52.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 186.35
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 11.68
38 TestAddons/parallel/CSI 50.02
39 TestAddons/parallel/Headlamp 19.33
40 TestAddons/parallel/CloudSpanner 6.46
41 TestAddons/parallel/LocalPath 12.06
42 TestAddons/parallel/NvidiaDevicePlugin 5.47
43 TestAddons/parallel/Yakd 11.88
44 TestAddons/StoppedEnableDisable 12.11
45 TestCertOptions 31.41
46 TestCertExpiration 225.51
48 TestForceSystemdFlag 31.15
49 TestForceSystemdEnv 25.71
51 TestKVMDriverInstallOrUpdate 4.83
55 TestErrorSpam/setup 22.62
56 TestErrorSpam/start 0.55
57 TestErrorSpam/status 0.86
58 TestErrorSpam/pause 1.49
59 TestErrorSpam/unpause 1.68
60 TestErrorSpam/stop 1.34
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 68.08
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 57.84
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
72 TestFunctional/serial/CacheCmd/cache/add_local 2.11
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 38.97
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.34
83 TestFunctional/serial/LogsFileCmd 1.35
84 TestFunctional/serial/InvalidService 4.35
86 TestFunctional/parallel/ConfigCmd 0.33
87 TestFunctional/parallel/DashboardCmd 12.94
88 TestFunctional/parallel/DryRun 0.36
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.93
94 TestFunctional/parallel/ServiceCmdConnect 20.69
95 TestFunctional/parallel/AddonsCmd 0.12
96 TestFunctional/parallel/PersistentVolumeClaim 35.72
98 TestFunctional/parallel/SSHCmd 0.57
99 TestFunctional/parallel/CpCmd 1.66
100 TestFunctional/parallel/MySQL 21.17
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 1.59
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
110 TestFunctional/parallel/License 0.66
111 TestFunctional/parallel/Version/short 0.05
112 TestFunctional/parallel/Version/components 0.82
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.2
118 TestFunctional/parallel/ImageCommands/Setup 1.93
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.2
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.17
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.85
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.61
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
135 TestFunctional/parallel/ServiceCmd/DeployApp 10.16
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
142 TestFunctional/parallel/ProfileCmd/profile_list 0.4
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
144 TestFunctional/parallel/MountCmd/any-port 8.7
145 TestFunctional/parallel/MountCmd/specific-port 1.74
146 TestFunctional/parallel/ServiceCmd/List 0.88
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.89
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
150 TestFunctional/parallel/ServiceCmd/Format 0.54
151 TestFunctional/parallel/ServiceCmd/URL 0.53
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 155.7
159 TestMultiControlPlane/serial/DeployApp 9.34
160 TestMultiControlPlane/serial/PingHostFromPods 1.01
161 TestMultiControlPlane/serial/AddWorkerNode 30.08
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
164 TestMultiControlPlane/serial/CopyFile 15.43
165 TestMultiControlPlane/serial/StopSecondaryNode 12.52
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
167 TestMultiControlPlane/serial/RestartSecondaryNode 32.53
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 168.19
170 TestMultiControlPlane/serial/DeleteSecondaryNode 11.27
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
172 TestMultiControlPlane/serial/StopCluster 35.54
173 TestMultiControlPlane/serial/RestartCluster 107.58
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
175 TestMultiControlPlane/serial/AddSecondaryNode 66.25
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
180 TestJSONOutput/start/Command 67.69
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.66
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.73
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 35.54
206 TestKicCustomNetwork/use_default_bridge_network 26.1
207 TestKicExistingNetwork 24.15
208 TestKicCustomSubnet 26.59
209 TestKicStaticIP 24.37
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 49.92
214 TestMountStart/serial/StartWithMountFirst 6.27
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 9.07
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 8.01
222 TestMountStart/serial/VerifyMountPostStop 0.23
225 TestMultiNode/serial/FreshStart2Nodes 95.57
226 TestMultiNode/serial/DeployApp2Nodes 5.99
227 TestMultiNode/serial/PingHostFrom2Pods 0.7
228 TestMultiNode/serial/AddNode 26.23
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.6
231 TestMultiNode/serial/CopyFile 8.85
232 TestMultiNode/serial/StopNode 2.08
233 TestMultiNode/serial/StartAfterStop 9.04
234 TestMultiNode/serial/RestartKeepsNodes 79.72
235 TestMultiNode/serial/DeleteNode 4.88
236 TestMultiNode/serial/StopMultiNode 23.68
237 TestMultiNode/serial/RestartMultiNode 58.32
238 TestMultiNode/serial/ValidateNameConflict 26
243 TestPreload 115.89
245 TestScheduledStopUnix 99.84
248 TestInsufficientStorage 12.74
249 TestRunningBinaryUpgrade 74.84
251 TestKubernetesUpgrade 336.8
252 TestMissingContainerUpgrade 104.59
254 TestStoppedBinaryUpgrade/Setup 2.67
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 31.66
260 TestStoppedBinaryUpgrade/Upgrade 109.1
265 TestNetworkPlugins/group/false 6.84
269 TestNoKubernetes/serial/StartWithStopK8s 9.48
270 TestNoKubernetes/serial/Start 5.68
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
272 TestNoKubernetes/serial/ProfileList 1.09
273 TestNoKubernetes/serial/Stop 2.84
274 TestNoKubernetes/serial/StartNoArgs 9.72
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
285 TestPause/serial/Start 78.37
286 TestNetworkPlugins/group/auto/Start 71.59
287 TestPause/serial/SecondStartNoReconfiguration 23.76
288 TestPause/serial/Pause 0.8
289 TestNetworkPlugins/group/auto/KubeletFlags 0.28
290 TestNetworkPlugins/group/auto/NetCatPod 9.23
291 TestPause/serial/VerifyStatus 0.32
292 TestPause/serial/Unpause 0.75
293 TestPause/serial/PauseAgain 0.95
294 TestPause/serial/DeletePaused 2.68
295 TestNetworkPlugins/group/kindnet/Start 69.2
296 TestPause/serial/VerifyDeletedResources 0.69
297 TestNetworkPlugins/group/calico/Start 58.48
298 TestNetworkPlugins/group/auto/DNS 0.19
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.11
301 TestNetworkPlugins/group/custom-flannel/Start 51.8
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.25
304 TestNetworkPlugins/group/calico/NetCatPod 10.17
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
307 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
309 TestNetworkPlugins/group/calico/DNS 0.13
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
311 TestNetworkPlugins/group/calico/Localhost 0.15
312 TestNetworkPlugins/group/calico/HairPin 0.13
313 TestNetworkPlugins/group/kindnet/DNS 0.14
314 TestNetworkPlugins/group/kindnet/Localhost 0.11
315 TestNetworkPlugins/group/custom-flannel/DNS 0.13
316 TestNetworkPlugins/group/kindnet/HairPin 0.13
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
319 TestNetworkPlugins/group/enable-default-cni/Start 67.29
320 TestNetworkPlugins/group/flannel/Start 50.87
321 TestNetworkPlugins/group/bridge/Start 73.17
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
325 TestNetworkPlugins/group/flannel/NetCatPod 10.19
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.18
327 TestNetworkPlugins/group/flannel/DNS 0.13
328 TestNetworkPlugins/group/flannel/Localhost 0.11
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
330 TestNetworkPlugins/group/flannel/HairPin 0.12
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
334 TestNetworkPlugins/group/bridge/NetCatPod 10.24
336 TestStartStop/group/old-k8s-version/serial/FirstStart 149.78
337 TestNetworkPlugins/group/bridge/DNS 0.19
338 TestNetworkPlugins/group/bridge/Localhost 0.12
339 TestNetworkPlugins/group/bridge/HairPin 0.12
341 TestStartStop/group/no-preload/serial/FirstStart 63.67
343 TestStartStop/group/embed-certs/serial/FirstStart 76.75
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.29
346 TestStartStop/group/no-preload/serial/DeployApp 12.24
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.93
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
351 TestStartStop/group/no-preload/serial/Stop 11.85
352 TestStartStop/group/embed-certs/serial/DeployApp 11.23
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.74
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
356 TestStartStop/group/no-preload/serial/SecondStart 263.04
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
358 TestStartStop/group/embed-certs/serial/Stop 12.06
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/embed-certs/serial/SecondStart 264.38
361 TestStartStop/group/old-k8s-version/serial/DeployApp 10.36
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
363 TestStartStop/group/old-k8s-version/serial/Stop 11.95
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
365 TestStartStop/group/old-k8s-version/serial/SecondStart 139.63
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
369 TestStartStop/group/old-k8s-version/serial/Pause 2.51
371 TestStartStop/group/newest-cni/serial/FirstStart 27.61
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
374 TestStartStop/group/newest-cni/serial/Stop 1.2
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
376 TestStartStop/group/newest-cni/serial/SecondStart 13.62
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/newest-cni/serial/Pause 2.91
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
387 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
388 TestStartStop/group/no-preload/serial/Pause 3.02
389 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
392 TestStartStop/group/embed-certs/serial/Pause 2.6
TestDownloadOnly/v1.20.0/json-events (14.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-536443 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-536443 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.318927486s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 18:18:23.045938  672823 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 18:18:23.046039  672823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-536443
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-536443: exit status 85 (67.298361ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-536443 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |          |
	|         | -p download-only-536443        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:18:08
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:18:08.764553  672835 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:08.764674  672835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:08.764684  672835 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:08.764688  672835 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:08.764866  672835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	W0920 18:18:08.764985  672835 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-664237/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-664237/.minikube/config/config.json: no such file or directory
	I0920 18:18:08.765557  672835 out.go:352] Setting JSON to true
	I0920 18:18:08.766620  672835 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7233,"bootTime":1726849056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:18:08.766727  672835 start.go:139] virtualization: kvm guest
	I0920 18:18:08.769319  672835 out.go:97] [download-only-536443] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 18:18:08.769437  672835 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:18:08.769497  672835 notify.go:220] Checking for updates...
	I0920 18:18:08.771034  672835 out.go:169] MINIKUBE_LOCATION=19678
	I0920 18:18:08.772542  672835 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:18:08.773929  672835 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:18:08.775555  672835 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:18:08.777092  672835 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 18:18:08.779865  672835 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:18:08.780214  672835 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:18:08.803048  672835 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:18:08.803121  672835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:08.849362  672835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:18:08.838907689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:08.849474  672835 docker.go:318] overlay module found
	I0920 18:18:08.851263  672835 out.go:97] Using the docker driver based on user configuration
	I0920 18:18:08.851295  672835 start.go:297] selected driver: docker
	I0920 18:18:08.851303  672835 start.go:901] validating driver "docker" against <nil>
	I0920 18:18:08.851412  672835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:08.897030  672835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:18:08.888068293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:08.897222  672835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:18:08.897751  672835 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 18:18:08.897909  672835 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:18:08.900136  672835 out.go:169] Using Docker driver with root privileges
	I0920 18:18:08.901565  672835 cni.go:84] Creating CNI manager for ""
	I0920 18:18:08.901632  672835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:18:08.901645  672835 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:18:08.901716  672835 start.go:340] cluster config:
	{Name:download-only-536443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-536443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:08.903198  672835 out.go:97] Starting "download-only-536443" primary control-plane node in "download-only-536443" cluster
	I0920 18:18:08.903239  672835 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:18:08.904642  672835 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:18:08.904672  672835 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:08.904780  672835 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:18:08.920634  672835 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:08.920812  672835 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:18:08.920889  672835 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:09.010774  672835 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:09.010806  672835 cache.go:56] Caching tarball of preloaded images
	I0920 18:18:09.011015  672835 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:18:09.013019  672835 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 18:18:09.013038  672835 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:18:09.132733  672835 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:13.205212  672835 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	
	
	* The control-plane node download-only-536443 host does not exist
	  To start a cluster, run: "minikube start -p download-only-536443"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-536443
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (12.96s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-183655 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-183655 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.960295135s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.96s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 18:18:36.414009  672823 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 18:18:36.414062  672823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-183655
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-183655: exit status 85 (60.545208ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-536443 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | -p download-only-536443        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| delete  | -p download-only-536443        | download-only-536443 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC | 20 Sep 24 18:18 UTC |
	| start   | -o=json --download-only        | download-only-183655 | jenkins | v1.34.0 | 20 Sep 24 18:18 UTC |                     |
	|         | -p download-only-183655        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:18:23
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:18:23.492679  673188 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:23.492915  673188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:23.492923  673188 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:23.492927  673188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:23.493095  673188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:18:23.493742  673188 out.go:352] Setting JSON to true
	I0920 18:18:23.494719  673188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7247,"bootTime":1726849056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:18:23.494818  673188 start.go:139] virtualization: kvm guest
	I0920 18:18:23.497864  673188 out.go:97] [download-only-183655] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:18:23.498059  673188 notify.go:220] Checking for updates...
	I0920 18:18:23.499791  673188 out.go:169] MINIKUBE_LOCATION=19678
	I0920 18:18:23.501731  673188 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:18:23.503261  673188 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:18:23.504968  673188 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:18:23.506864  673188 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 18:18:23.510059  673188 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:18:23.510317  673188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:18:23.532283  673188 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:18:23.532371  673188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:23.577655  673188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:23.568809387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:23.577773  673188 docker.go:318] overlay module found
	I0920 18:18:23.579536  673188 out.go:97] Using the docker driver based on user configuration
	I0920 18:18:23.579567  673188 start.go:297] selected driver: docker
	I0920 18:18:23.579573  673188 start.go:901] validating driver "docker" against <nil>
	I0920 18:18:23.579676  673188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:18:23.625568  673188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 18:18:23.616673755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:18:23.625731  673188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:18:23.626250  673188 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 18:18:23.626393  673188 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:18:23.628491  673188 out.go:169] Using Docker driver with root privileges
	I0920 18:18:23.629843  673188 cni.go:84] Creating CNI manager for ""
	I0920 18:18:23.629909  673188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:18:23.629922  673188 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:18:23.630000  673188 start.go:340] cluster config:
	{Name:download-only-183655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-183655 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:18:23.631282  673188 out.go:97] Starting "download-only-183655" primary control-plane node in "download-only-183655" cluster
	I0920 18:18:23.631299  673188 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:18:23.632625  673188 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:18:23.632655  673188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:23.632769  673188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:18:23.648704  673188 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:18:23.648878  673188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:18:23.648899  673188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:18:23.648906  673188 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:18:23.648920  673188 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:18:24.108024  673188 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 18:18:24.108064  673188 cache.go:56] Caching tarball of preloaded images
	I0920 18:18:24.108250  673188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:18:24.110475  673188 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 18:18:24.110508  673188 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 18:18:24.218254  673188 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19678-664237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-183655 host does not exist
	  To start a cluster, run: "minikube start -p download-only-183655"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-183655
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.06s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-729301 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-729301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-729301
--- PASS: TestDownloadOnlyKic (1.06s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0920 18:18:38.104773  672823 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-249385 --alsologtostderr --binary-mirror http://127.0.0.1:43551 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-249385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-249385
--- PASS: TestBinaryMirror (0.76s)

TestOffline (52.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-822485 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-822485 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (49.933530609s)
helpers_test.go:175: Cleaning up "offline-crio-822485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-822485
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-822485: (2.60508212s)
--- PASS: TestOffline (52.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-162403
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-162403: exit status 85 (51.219217ms)

-- stdout --
	* Profile "addons-162403" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-162403"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-162403
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-162403: exit status 85 (52.733003ms)

-- stdout --
	* Profile "addons-162403" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-162403"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (186.35s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-162403 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-162403 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m6.350150063s)
--- PASS: TestAddons/Setup (186.35s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-162403 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-162403 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (11.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cccjv" [691fe89b-d454-449c-a7d1-32dd6ed976d1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004167879s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-162403
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-162403: (5.671645339s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

TestAddons/parallel/CSI (50.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 18:29:48.085733  672823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.957208ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-162403 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-162403 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [626ca8ac-1170-4b8d-b798-2dfc02ce91f3] Pending
helpers_test.go:344: "task-pv-pod" [626ca8ac-1170-4b8d-b798-2dfc02ce91f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [626ca8ac-1170-4b8d-b798-2dfc02ce91f3] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003449126s
addons_test.go:528: (dbg) Run:  kubectl --context addons-162403 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-162403 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-162403 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-162403 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-162403 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-162403 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-162403 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [020b450c-7b94-499c-9b42-4d20e6a0595f] Pending
helpers_test.go:344: "task-pv-pod-restore" [020b450c-7b94-499c-9b42-4d20e6a0595f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [020b450c-7b94-499c-9b42-4d20e6a0595f] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003594523s
addons_test.go:570: (dbg) Run:  kubectl --context addons-162403 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-162403 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-162403 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.623933497s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.02s)

TestAddons/parallel/Headlamp (19.33s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-162403 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-jz68p" [1c69e518-81ad-401c-9b93-210a1a8718ac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-jz68p" [1c69e518-81ad-401c-9b93-210a1a8718ac] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00373359s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 addons disable headlamp --alsologtostderr -v=1: (5.590444082s)
--- PASS: TestAddons/parallel/Headlamp (19.33s)

TestAddons/parallel/CloudSpanner (6.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-jk76f" [fe2f940d-31fc-402d-adcd-8f543f61f039] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004500538s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-162403
--- PASS: TestAddons/parallel/CloudSpanner (6.46s)

TestAddons/parallel/LocalPath (12.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-162403 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-162403 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [709b2fc7-78c9-4659-b518-70c3e912252d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [709b2fc7-78c9-4659-b518-70c3e912252d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [709b2fc7-78c9-4659-b518-70c3e912252d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003324239s
addons_test.go:938: (dbg) Run:  kubectl --context addons-162403 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 ssh "cat /opt/local-path-provisioner/pvc-7362d9da-c19d-46d1-ab52-e395c2ebef40_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-162403 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-162403 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.06s)

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vkrvk" [e7dcaefe-b427-4947-b9f7-651ee1b219f8] Running
I0920 18:29:48.089620  672823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:29:48.089644  672823 kapi.go:107] duration metric: took 3.944997ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003450781s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-162403
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (11.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-m57xt" [26a83085-95f7-45c8-9ea8-090cc1dc9c79] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003955781s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-162403 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-162403 addons disable yakd --alsologtostderr -v=1: (5.874801762s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

TestAddons/StoppedEnableDisable (12.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-162403
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-162403: (11.869619933s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-162403
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-162403
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-162403
--- PASS: TestAddons/StoppedEnableDisable (12.11s)

TestCertOptions (31.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-666428 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-666428 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.905150434s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-666428 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-666428 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-666428 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-666428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-666428
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-666428: (1.85262906s)
--- PASS: TestCertOptions (31.41s)

TestCertExpiration (225.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-376478 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-376478 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.228459963s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-376478 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-376478 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.861622331s)
helpers_test.go:175: Cleaning up "cert-expiration-376478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-376478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-376478: (2.422372348s)
--- PASS: TestCertExpiration (225.51s)

TestForceSystemdFlag (31.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-312943 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-312943 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.352527215s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-312943 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-312943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-312943
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-312943: (2.51303437s)
--- PASS: TestForceSystemdFlag (31.15s)

TestForceSystemdEnv (25.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-297363 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-297363 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.36729261s)
helpers_test.go:175: Cleaning up "force-systemd-env-297363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-297363
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-297363: (2.341598001s)
--- PASS: TestForceSystemdEnv (25.71s)

TestKVMDriverInstallOrUpdate (4.83s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0920 19:05:50.428950  672823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 19:05:50.429091  672823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 19:05:50.461764  672823 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 19:05:50.462083  672823 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 19:05:50.462154  672823 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2045851888/001/docker-machine-driver-kvm2
I0920 19:05:50.701839  672823 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2045851888/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000125890 gz:0xc000125898 tar:0xc000125800 tar.bz2:0xc000125850 tar.gz:0xc000125860 tar.xz:0xc000125870 tar.zst:0xc000125880 tbz2:0xc000125850 tgz:0xc000125860 txz:0xc000125870 tzst:0xc000125880 xz:0xc0001258a0 zip:0xc0001258b0 zst:0xc0001258a8] Getters:map[file:0xc0013e1170 http:0xc000783360 https:0xc0007833b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 19:05:50.701895  672823 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2045851888/001/docker-machine-driver-kvm2
I0920 19:05:53.357612  672823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 19:05:53.357714  672823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 19:05:53.388107  672823 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 19:05:53.388139  672823 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 19:05:53.388200  672823 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 19:05:53.388232  672823 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2045851888/002/docker-machine-driver-kvm2
I0920 19:05:53.450193  672823 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2045851888/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000125890 gz:0xc000125898 tar:0xc000125800 tar.bz2:0xc000125850 tar.gz:0xc000125860 tar.xz:0xc000125870 tar.zst:0xc000125880 tbz2:0xc000125850 tgz:0xc000125860 txz:0xc000125870 tzst:0xc000125880 xz:0xc0001258a0 zip:0xc0001258b0 zst:0xc0001258a8] Getters:map[file:0xc000afd6d0 http:0xc0007aa9b0 https:0xc0007aaa00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 19:05:53.450245  672823 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2045851888/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.83s)

TestErrorSpam/setup (22.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-813982 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813982 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-813982 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-813982 --driver=docker  --container-runtime=crio: (22.615589899s)
--- PASS: TestErrorSpam/setup (22.62s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 pause
--- PASS: TestErrorSpam/pause (1.49s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 stop: (1.172451067s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813982 --log_dir /tmp/nospam-813982 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-664237/.minikube/files/etc/test/nested/copy/672823/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0920 18:36:45.571227  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.577687  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.589075  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.610485  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.651894  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.733368  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:45.894904  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:46.216599  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:46.858720  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:48.140256  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-145666 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.07919092s)
--- PASS: TestFunctional/serial/StartWithProxy (68.08s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (57.84s)

=== RUN   TestFunctional/serial/SoftStart
I0920 18:36:48.622893  672823 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --alsologtostderr -v=8
E0920 18:36:50.702604  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:36:55.824855  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:37:06.067147  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:37:26.548980  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-145666 --alsologtostderr -v=8: (57.8398199s)
functional_test.go:663: soft start took 57.840542505s for "functional-145666" cluster.
I0920 18:37:46.463101  672823 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (57.84s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-145666 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:3.1: (1.0451253s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:3.3: (1.162705145s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 cache add registry.k8s.io/pause:latest: (1.052902504s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-145666 /tmp/TestFunctionalserialCacheCmdcacheadd_local3308763288/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache add minikube-local-cache-test:functional-145666
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 cache add minikube-local-cache-test:functional-145666: (1.775166765s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache delete minikube-local-cache-test:functional-145666
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-145666
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.010651ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 kubectl -- --context functional-145666 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-145666 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0920 18:38:07.511545  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-145666 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.972260116s)
functional_test.go:761: restart took 38.972398895s for "functional-145666" cluster.
I0920 18:38:33.262263  672823 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.97s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-145666 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
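ComponentHealth fetches the control-plane pods as JSON and checks each one's phase and Ready condition. A minimal Python sketch of that check (not minikube's actual Go helper; the sample JSON document is hypothetical, trimmed to the fields the check reads):

```python
import json

# Hypothetical kubectl-style pod list, reduced to the fields the check uses.
sample = json.loads("""
{"items": [
  {"metadata": {"labels": {"component": "etcd"}},
   "status": {"phase": "Running",
              "conditions": [{"type": "Ready", "status": "True"}]}},
  {"metadata": {"labels": {"component": "kube-apiserver"}},
   "status": {"phase": "Running",
              "conditions": [{"type": "Ready", "status": "True"}]}}
]}
""")

def component_health(pods: dict) -> list[str]:
    """Return components whose pod is in phase Running with Ready=True."""
    healthy = []
    for pod in pods["items"]:
        name = pod["metadata"]["labels"]["component"]
        phase_ok = pod["status"]["phase"] == "Running"
        ready_ok = any(c["type"] == "Ready" and c["status"] == "True"
                       for c in pod["status"]["conditions"])
        if phase_ok and ready_ok:
            healthy.append(name)
    return healthy

print(component_health(sample))  # ['etcd', 'kube-apiserver']
```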

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 logs: (1.337718271s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 logs --file /tmp/TestFunctionalserialLogsFileCmd511275802/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 logs --file /tmp/TestFunctionalserialLogsFileCmd511275802/001/logs.txt: (1.348620948s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-145666 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-145666
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-145666: exit status 115 (323.536348ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31671 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-145666 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)
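The SVC_UNREACHABLE failure above is the expected outcome: `minikube service` only prints a usable URL when at least one backing pod is Running. A toy model of that decision (illustrative only, not minikube's internals; the exit status 115 is taken from the log above):

```python
# Illustrative model of the InvalidService check: a service with no Running
# backing pod is reported unreachable. Exit status 115 comes from the log
# excerpt above; everything else here is a hypothetical simplification.
SVC_UNREACHABLE = 115

def service_exit_code(backing_pod_phases: list[str]) -> int:
    """Return 0 if any backing pod is Running, else the SVC_UNREACHABLE code."""
    if any(phase == "Running" for phase in backing_pod_phases):
        return 0
    return SVC_UNREACHABLE

assert service_exit_code(["Running"]) == 0
assert service_exit_code(["Pending"]) == SVC_UNREACHABLE  # invalid-svc case
assert service_exit_code([]) == SVC_UNREACHABLE           # no pods at all
```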

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 config get cpus: exit status 14 (66.432838ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 config get cpus: exit status 14 (48.808819ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-145666 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-145666 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 718983: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.94s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-145666 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (135.459429ms)

-- stdout --
	* [functional-145666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0920 18:39:14.895835  717735 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:39:14.896118  717735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:14.896128  717735 out.go:358] Setting ErrFile to fd 2...
	I0920 18:39:14.896132  717735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:14.896361  717735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:39:14.896949  717735 out.go:352] Setting JSON to false
	I0920 18:39:14.898021  717735 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8499,"bootTime":1726849056,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:39:14.898125  717735 start.go:139] virtualization: kvm guest
	I0920 18:39:14.900096  717735 out.go:177] * [functional-145666] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:39:14.901420  717735 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:39:14.901503  717735 notify.go:220] Checking for updates...
	I0920 18:39:14.904003  717735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:39:14.905709  717735 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:39:14.906945  717735 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:39:14.908185  717735 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:39:14.909302  717735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:39:14.910927  717735 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:39:14.911433  717735 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:39:14.933325  717735 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:39:14.933431  717735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:39:14.978341  717735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:39:14.969096559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:39:14.978449  717735 docker.go:318] overlay module found
	I0920 18:39:14.981660  717735 out.go:177] * Using the docker driver based on existing profile
	I0920 18:39:14.982798  717735 start.go:297] selected driver: docker
	I0920 18:39:14.982810  717735 start.go:901] validating driver "docker" against &{Name:functional-145666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-145666 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:39:14.982897  717735 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:39:14.985145  717735 out.go:201] 
	W0920 18:39:14.986319  717735 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:39:14.987462  717735 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-145666 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-145666 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (151.608768ms)

-- stdout --
	* [functional-145666] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0920 18:39:14.752530  717631 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:39:14.752636  717631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:14.752645  717631 out.go:358] Setting ErrFile to fd 2...
	I0920 18:39:14.752649  717631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:39:14.752935  717631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:39:14.753470  717631 out.go:352] Setting JSON to false
	I0920 18:39:14.754530  717631 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8499,"bootTime":1726849056,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:39:14.754603  717631 start.go:139] virtualization: kvm guest
	I0920 18:39:14.756789  717631 out.go:177] * [functional-145666] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 18:39:14.759356  717631 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:39:14.759410  717631 notify.go:220] Checking for updates...
	I0920 18:39:14.762071  717631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:39:14.763279  717631 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 18:39:14.764548  717631 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 18:39:14.765730  717631 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:39:14.766928  717631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:39:14.768536  717631 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:39:14.768984  717631 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:39:14.795020  717631 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:39:14.795156  717631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:39:14.842916  717631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:39:14.833081559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:39:14.843056  717631 docker.go:318] overlay module found
	I0920 18:39:14.845060  717631 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 18:39:14.846311  717631 start.go:297] selected driver: docker
	I0920 18:39:14.846323  717631 start.go:901] validating driver "docker" against &{Name:functional-145666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-145666 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:39:14.846419  717631 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:39:14.848615  717631 out.go:201] 
	W0920 18:39:14.850337  717631 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:39:14.851856  717631 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-145666 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-145666 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4r9vs" [fef838b8-605e-46ba-b205-047cc78acf6f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4r9vs" [fef838b8-605e-46ba-b205-047cc78acf6f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.003563484s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32024
functional_test.go:1675: http://192.168.49.2:32024: success! body:

Hostname: hello-node-connect-67bdd5bbb4-4r9vs

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32024
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.69s)
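The connectivity check above boils down to: resolve the NodePort URL, fetch it, and confirm the echo body names the serving pod. A self-contained sketch using a local stdlib server in place of the echoserver pod (illustrative, not the test's Go code):

```python
import http.server
import threading
import urllib.request

# A local stand-in for the echoserver pod: answers GET with a Hostname line
# like the one in the response body above.
class Echo(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hostname: hello-node-connect-67bdd5bbb4-4r9vs\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Echo)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode()

server.shutdown()
print("success!" if "Hostname: hello-node-connect" in body else "unreachable")
```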

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (35.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cff34801-c76e-4ae3-8ba7-9f25bcf64fcc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003950058s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-145666 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-145666 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-145666 get pvc myclaim -o=json
I0920 18:38:48.349885  672823 retry.go:31] will retry after 1.140605323s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:e6a05686-9d0e-48f9-b66b-e54b955e64e1 ResourceVersion:733 Generation:0 CreationTimestamp:2024-09-20 18:38:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-e6a05686-9d0e-48f9-b66b-e54b955e64e1 StorageClassName:0xc00199ca90 VolumeMode:0xc00199caa0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-145666 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-145666 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d870648a-8508-470b-8879-356e4e18551f] Pending
helpers_test.go:344: "sp-pod" [d870648a-8508-470b-8879-356e4e18551f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d870648a-8508-470b-8879-356e4e18551f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003385312s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-145666 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-145666 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-145666 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4ed1c630-3937-41ae-92a6-c24e085c582b] Pending
helpers_test.go:344: "sp-pod" [4ed1c630-3937-41ae-92a6-c24e085c582b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4ed1c630-3937-41ae-92a6-c24e085c582b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004135101s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-145666 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.72s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh -n functional-145666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cp functional-145666:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd462504998/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh -n functional-145666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh -n functional-145666 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (21.17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-145666 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-qwhn9" [152ebf1d-3b40-4074-9a79-39be82616abe] Pending
helpers_test.go:344: "mysql-6cdb49bbb-qwhn9" [152ebf1d-3b40-4074-9a79-39be82616abe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-qwhn9" [152ebf1d-3b40-4074-9a79-39be82616abe] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.002734931s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-145666 exec mysql-6cdb49bbb-qwhn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-145666 exec mysql-6cdb49bbb-qwhn9 -- mysql -ppassword -e "show databases;": exit status 1 (103.590994ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:38:59.662319  672823 retry.go:31] will retry after 1.015129709s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-145666 exec mysql-6cdb49bbb-qwhn9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-145666 exec mysql-6cdb49bbb-qwhn9 -- mysql -ppassword -e "show databases;": exit status 1 (375.941237ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:39:01.054201  672823 retry.go:31] will retry after 1.364208412s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-145666 exec mysql-6cdb49bbb-qwhn9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.17s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/672823/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /etc/test/nested/copy/672823/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/672823.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /etc/ssl/certs/672823.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/672823.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /usr/share/ca-certificates/672823.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/6728232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /etc/ssl/certs/6728232.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/6728232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /usr/share/ca-certificates/6728232.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-145666 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "sudo systemctl is-active docker": exit status 1 (272.27844ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "sudo systemctl is-active containerd": exit status 1 (281.864784ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
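The non-zero exits above are the expected outcome: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, so on this crio profile the `docker` and `containerd` units print `inactive` and exit with status 3, which the ssh wrapper surfaces as a failure. A hedged sketch of interpreting that result (the 0-vs-non-zero mapping follows systemd's documented behavior; the helper name is illustrative, not minikube's code):

```go
package main

import "fmt"

// runtimeDisabled interprets a `systemctl is-active <unit>` result:
// exit status 0 means the unit is active; a non-zero status with
// "inactive" on stdout means the runtime is not running.
func runtimeDisabled(exitCode int, stdout string) bool {
	return exitCode != 0 && stdout == "inactive"
}

func main() {
	// Values observed in the log above for the docker unit.
	fmt.Println(runtimeDisabled(3, "inactive")) // true: docker is disabled
	fmt.Println(runtimeDisabled(0, "active"))   // false: unit is running
}
```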

TestFunctional/parallel/License (0.66s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.66s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-145666 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-145666
localhost/kicbase/echo-server:functional-145666
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-145666 image ls --format short --alsologtostderr:
I0920 18:39:16.587254  718921 out.go:345] Setting OutFile to fd 1 ...
I0920 18:39:16.587447  718921 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:16.587458  718921 out.go:358] Setting ErrFile to fd 2...
I0920 18:39:16.587464  718921 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:16.587682  718921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
I0920 18:39:16.588284  718921 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:16.588406  718921 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:16.588835  718921 cli_runner.go:164] Run: docker container inspect functional-145666 --format={{.State.Status}}
I0920 18:39:16.606551  718921 ssh_runner.go:195] Run: systemctl --version
I0920 18:39:16.606599  718921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-145666
I0920 18:39:16.622558  718921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/functional-145666/id_rsa Username:docker}
I0920 18:39:16.791365  718921 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-145666 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-145666  | d3e15e49c9e53 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | alpine             | c7b4f26a7d93f | 44.6MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| localhost/kicbase/echo-server           | functional-145666  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-145666 image ls --format table --alsologtostderr:
I0920 18:39:17.349036  719291 out.go:345] Setting OutFile to fd 1 ...
I0920 18:39:17.349626  719291 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.349642  719291 out.go:358] Setting ErrFile to fd 2...
I0920 18:39:17.349650  719291 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.350069  719291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
I0920 18:39:17.351724  719291 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.351882  719291 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.352350  719291 cli_runner.go:164] Run: docker container inspect functional-145666 --format={{.State.Status}}
I0920 18:39:17.370082  719291 ssh_runner.go:195] Run: systemctl --version
I0920 18:39:17.370131  719291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-145666
I0920 18:39:17.387048  719291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/functional-145666/id_rsa Username:docker}
I0920 18:39:17.483538  719291 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-145666 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-145666"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"da86e6ba6ca197bf6b
c5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDiges
ts":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92
f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":["docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44647101"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d3e15e49c9e5395a7a90a9612cc251d3df186a376b142e3cd397c166834e9023","repoDigests":["localhost/minikube-local-cache-test@sha256:906a57ca690653febb08efb850cc5b7c7a3d88afda74a7e36079f02fa8d27ecc"],"repoTags":["localhost/minikube-local-cache-test:functional-145666"],"size":"3328"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-145666 image ls --format json --alsologtostderr:
I0920 18:39:17.129983  719202 out.go:345] Setting OutFile to fd 1 ...
I0920 18:39:17.130142  719202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.130155  719202 out.go:358] Setting ErrFile to fd 2...
I0920 18:39:17.130162  719202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.130393  719202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
I0920 18:39:17.131091  719202 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.131207  719202 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.131601  719202 cli_runner.go:164] Run: docker container inspect functional-145666 --format={{.State.Status}}
I0920 18:39:17.151989  719202 ssh_runner.go:195] Run: systemctl --version
I0920 18:39:17.152050  719202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-145666
I0920 18:39:17.172478  719202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/functional-145666/id_rsa Username:docker}
I0920 18:39:17.267472  719202 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-145666 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests:
- docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "44647101"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-145666
size: "4943877"
- id: d3e15e49c9e5395a7a90a9612cc251d3df186a376b142e3cd397c166834e9023
repoDigests:
- localhost/minikube-local-cache-test@sha256:906a57ca690653febb08efb850cc5b7c7a3d88afda74a7e36079f02fa8d27ecc
repoTags:
- localhost/minikube-local-cache-test:functional-145666
size: "3328"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-145666 image ls --format yaml --alsologtostderr:
I0920 18:39:16.915161  719020 out.go:345] Setting OutFile to fd 1 ...
I0920 18:39:16.915281  719020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:16.915292  719020 out.go:358] Setting ErrFile to fd 2...
I0920 18:39:16.915299  719020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:16.915555  719020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
I0920 18:39:16.916422  719020 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:16.916576  719020 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:16.917120  719020 cli_runner.go:164] Run: docker container inspect functional-145666 --format={{.State.Status}}
I0920 18:39:16.935877  719020 ssh_runner.go:195] Run: systemctl --version
I0920 18:39:16.935924  719020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-145666
I0920 18:39:16.953978  719020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/functional-145666/id_rsa Username:docker}
I0920 18:39:17.047342  719020 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh pgrep buildkitd: exit status 1 (251.207018ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image build -t localhost/my-image:functional-145666 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 image build -t localhost/my-image:functional-145666 testdata/build --alsologtostderr: (3.739617839s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-145666 image build -t localhost/my-image:functional-145666 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 20325be9de7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-145666
--> ca7199f0121
Successfully tagged localhost/my-image:functional-145666
ca7199f01219dea6a4b5b6938d803c34543adfa8215f62684641f1db8a98bb53
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-145666 image build -t localhost/my-image:functional-145666 testdata/build --alsologtostderr:
I0920 18:39:17.183674  719221 out.go:345] Setting OutFile to fd 1 ...
I0920 18:39:17.184039  719221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.184170  719221 out.go:358] Setting ErrFile to fd 2...
I0920 18:39:17.184191  719221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:39:17.184478  719221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
I0920 18:39:17.185102  719221 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.185631  719221 config.go:182] Loaded profile config "functional-145666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:39:17.186038  719221 cli_runner.go:164] Run: docker container inspect functional-145666 --format={{.State.Status}}
I0920 18:39:17.203523  719221 ssh_runner.go:195] Run: systemctl --version
I0920 18:39:17.203577  719221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-145666
I0920 18:39:17.221517  719221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/functional-145666/id_rsa Username:docker}
I0920 18:39:17.314949  719221 build_images.go:161] Building image from path: /tmp/build.573240037.tar
I0920 18:39:17.315054  719221 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:39:17.324109  719221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.573240037.tar
I0920 18:39:17.327863  719221 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.573240037.tar: stat -c "%s %y" /var/lib/minikube/build/build.573240037.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.573240037.tar': No such file or directory
I0920 18:39:17.327895  719221 ssh_runner.go:362] scp /tmp/build.573240037.tar --> /var/lib/minikube/build/build.573240037.tar (3072 bytes)
I0920 18:39:17.353061  719221 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.573240037
I0920 18:39:17.361382  719221 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.573240037 -xf /var/lib/minikube/build/build.573240037.tar
I0920 18:39:17.371214  719221 crio.go:315] Building image: /var/lib/minikube/build/build.573240037
I0920 18:39:17.371293  719221 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-145666 /var/lib/minikube/build/build.573240037 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0920 18:39:20.852921  719221 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-145666 /var/lib/minikube/build/build.573240037 --cgroup-manager=cgroupfs: (3.481599269s)
I0920 18:39:20.853002  719221 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.573240037
I0920 18:39:20.861243  719221 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.573240037.tar
I0920 18:39:20.869128  719221 build_images.go:217] Built localhost/my-image:functional-145666 from /tmp/build.573240037.tar
I0920 18:39:20.869161  719221 build_images.go:133] succeeded building to: functional-145666
I0920 18:39:20.869168  719221 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
2024/09/20 18:39:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.20s)
TestFunctional/parallel/ImageCommands/Setup (1.93s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.907714947s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-145666
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 712635: os: process already finished
helpers_test.go:508: unable to kill pid 712424: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image load --daemon kicbase/echo-server:functional-145666 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 image load --daemon kicbase/echo-server:functional-145666 --alsologtostderr: (1.026922802s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.2s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-145666 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [21f57baf-2a85-44a9-88a2-625054938b7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [21f57baf-2a85-44a9-88a2-625054938b7d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.003791964s
I0920 18:39:02.553293  672823 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.20s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image load --daemon kicbase/echo-server:functional-145666 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-145666
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image load --daemon kicbase/echo-server:functional-145666 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image save kicbase/echo-server:functional-145666 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 image save kicbase/echo-server:functional-145666 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.845930642s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.85s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image rm kicbase/echo-server:functional-145666 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-145666 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.381450087s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.61s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-145666
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 image save --daemon kicbase/echo-server:functional-145666 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-145666
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-145666 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-145666 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-145666 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-r86zn" [30dbcd25-5e9f-45ea-ad84-4601718c3fb1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-r86zn" [30dbcd25-5e9f-45ea-ad84-4601718c3fb1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00333991s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.50.200 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-145666 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "352.264713ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.509239ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "320.213229ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.511231ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
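The `--- PASS: Name (N.NNs)` result lines throughout this report follow `go test`'s standard verbose output format. A minimal sketch (a hypothetical helper, not part of the minikube test harness) for pulling test names and durations out of a report like this one:

```python
import re

# Matches go test verbose result lines such as:
#   --- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)
RESULT_RE = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)")

def parse_results(report_text):
    """Yield (status, test_name, seconds) for each result line in the report."""
    for line in report_text.splitlines():
        m = RESULT_RE.match(line.strip())
        if m:
            yield m.group(1), m.group(2), float(m.group(3))

sample = "--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)"
print(list(parse_results(sample)))
```

Sorting the resulting tuples by the seconds field is one quick way to find the slowest tests in a run like this.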

TestFunctional/parallel/MountCmd/any-port (8.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdany-port3354663234/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726857543901442723" to /tmp/TestFunctionalparallelMountCmdany-port3354663234/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726857543901442723" to /tmp/TestFunctionalparallelMountCmdany-port3354663234/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726857543901442723" to /tmp/TestFunctionalparallelMountCmdany-port3354663234/001/test-1726857543901442723
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.542503ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0920 18:39:04.197297  672823 retry.go:31] will retry after 399.790333ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:39 test-1726857543901442723
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh cat /mount-9p/test-1726857543901442723
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-145666 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f29c2fa1-ac97-49d5-8b7e-5ce8d1d8afeb] Pending
helpers_test.go:344: "busybox-mount" [f29c2fa1-ac97-49d5-8b7e-5ce8d1d8afeb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f29c2fa1-ac97-49d5-8b7e-5ce8d1d8afeb] Running
helpers_test.go:344: "busybox-mount" [f29c2fa1-ac97-49d5-8b7e-5ce8d1d8afeb] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f29c2fa1-ac97-49d5-8b7e-5ce8d1d8afeb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00346185s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-145666 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdany-port3354663234/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)
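The `retry.go:31] will retry after 399.790333ms` line above comes from the harness re-running `findmnt` with a jittered delay until the 9p mount appears. The pattern is easy to sketch (hypothetical Python, not minikube's actual `retry` package):

```python
import random
import time

def retry(fn, attempts=5, base_delay=0.3):
    """Call fn until it succeeds, sleeping a jittered delay between failures,
    roughly like the harness's "will retry after ...ms" behavior."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = base_delay * (1 + random.random())  # jittered backoff
            print(f"will retry after {delay * 1000:.0f}ms: {exc}")
            time.sleep(delay)

# Example: a probe that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("exit status 1")
    return "mounted"

print(retry(flaky, base_delay=0.01))
```

In the log above the first `findmnt` attempt failed (the mount daemon had not finished yet) and the retried attempt succeeded, which is why the test still passes.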

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdspecific-port472403306/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.136214ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0920 18:39:12.860634  672823 retry.go:31] will retry after 539.048393ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdspecific-port472403306/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "sudo umount -f /mount-9p": exit status 1 (240.766323ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-145666 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdspecific-port472403306/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service list -o json
functional_test.go:1494: Took "889.975033ms" to run "out/minikube-linux-amd64 -p functional-145666 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T" /mount1: exit status 1 (355.401166ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0920 18:39:14.704618  672823 retry.go:31] will retry after 437.192877ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-145666 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-145666 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3297740208/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32054
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)
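The endpoint found above, `https://192.168.49.2:32054`, has the usual NodePort shape: the node's IP plus a port from the NodePort allocation range. A small sketch that splits such a URL (the endpoint is the one from this log; the range check reflects Kubernetes' default 30000-32767 NodePort range):

```python
from urllib.parse import urlsplit

# Endpoint reported by `minikube service --https --url` in the log above.
endpoint = "https://192.168.49.2:32054"
parts = urlsplit(endpoint)
print(parts.scheme, parts.hostname, parts.port)

# NodePort services draw from the default 30000-32767 range.
assert 30000 <= parts.port <= 32767
```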

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-145666 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32054
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-145666
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-145666
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-145666
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (155.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-517435 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 18:41:45.568824  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-517435 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m35.017045485s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (155.70s)

TestMultiControlPlane/serial/DeployApp (9.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- rollout status deployment/busybox
E0920 18:42:13.275177  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-517435 -- rollout status deployment/busybox: (7.514937095s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-597vn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-5vhjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-6x4nm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-597vn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-5vhjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-6x4nm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-597vn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-5vhjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-6x4nm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.34s)

TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-597vn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-597vn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-5vhjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-5vhjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-6x4nm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-517435 -- exec busybox-7dff88458-6x4nm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
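The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox `nslookup` output: take line 5, then the third space-separated field. The same extraction in Python (the sample output below is illustrative of busybox's format, not captured from this run):

```python
# Sample busybox-style nslookup output (illustrative, not from this run).
nslookup_output = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal"""

def host_ip(output):
    """Replicates awk 'NR==5' | cut -d' ' -f3: line 5, third field on single spaces."""
    line = output.splitlines()[4]  # awk 'NR==5' -> fifth line
    return line.split(" ")[2]      # cut -d' ' -f3 -> third field

print(host_ip(nslookup_output))
```

The extracted address is then fed to `ping -c 1`, which is exactly what the three `sh -c "ping -c 1 192.168.49.1"` runs above verify from each busybox pod.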

TestMultiControlPlane/serial/AddWorkerNode (30.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-517435 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-517435 -v=7 --alsologtostderr: (29.257410669s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-517435 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (15.43s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp testdata/cp-test.txt ha-517435:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3190825908/001/cp-test_ha-517435.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435:/home/docker/cp-test.txt ha-517435-m02:/home/docker/cp-test_ha-517435_ha-517435-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test_ha-517435_ha-517435-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435:/home/docker/cp-test.txt ha-517435-m03:/home/docker/cp-test_ha-517435_ha-517435-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test_ha-517435_ha-517435-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435:/home/docker/cp-test.txt ha-517435-m04:/home/docker/cp-test_ha-517435_ha-517435-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test_ha-517435_ha-517435-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp testdata/cp-test.txt ha-517435-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3190825908/001/cp-test_ha-517435-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m02:/home/docker/cp-test.txt ha-517435:/home/docker/cp-test_ha-517435-m02_ha-517435.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test_ha-517435-m02_ha-517435.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m02:/home/docker/cp-test.txt ha-517435-m03:/home/docker/cp-test_ha-517435-m02_ha-517435-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test_ha-517435-m02_ha-517435-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m02:/home/docker/cp-test.txt ha-517435-m04:/home/docker/cp-test_ha-517435-m02_ha-517435-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test_ha-517435-m02_ha-517435-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp testdata/cp-test.txt ha-517435-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3190825908/001/cp-test_ha-517435-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m03:/home/docker/cp-test.txt ha-517435:/home/docker/cp-test_ha-517435-m03_ha-517435.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test_ha-517435-m03_ha-517435.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m03:/home/docker/cp-test.txt ha-517435-m02:/home/docker/cp-test_ha-517435-m03_ha-517435-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test_ha-517435-m03_ha-517435-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m03:/home/docker/cp-test.txt ha-517435-m04:/home/docker/cp-test_ha-517435-m03_ha-517435-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test_ha-517435-m03_ha-517435-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp testdata/cp-test.txt ha-517435-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3190825908/001/cp-test_ha-517435-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m04:/home/docker/cp-test.txt ha-517435:/home/docker/cp-test_ha-517435-m04_ha-517435.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435 "sudo cat /home/docker/cp-test_ha-517435-m04_ha-517435.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m04:/home/docker/cp-test.txt ha-517435-m02:/home/docker/cp-test_ha-517435-m04_ha-517435-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m02 "sudo cat /home/docker/cp-test_ha-517435-m04_ha-517435-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 cp ha-517435-m04:/home/docker/cp-test.txt ha-517435-m03:/home/docker/cp-test_ha-517435-m04_ha-517435-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 ssh -n ha-517435-m03 "sudo cat /home/docker/cp-test_ha-517435-m04_ha-517435-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.43s)
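The CopyFile run above is an all-pairs matrix: `testdata/cp-test.txt` is copied onto each node, and each node's copy is then pushed to every other node and read back over `ssh -n`. The enumeration can be sketched with `itertools.permutations` (node names taken from this log; `minikube` stands in for the `out/minikube-linux-amd64` binary used above):

```python
from itertools import permutations

nodes = ["ha-517435", "ha-517435-m02", "ha-517435-m03", "ha-517435-m04"]

def copy_matrix(profile, nodes):
    """Enumerate the node-to-node copies the CopyFile test performs."""
    cmds = []
    for src, dst in permutations(nodes, 2):  # every ordered pair, src != dst
        dst_path = f"/home/docker/cp-test_{src}_{dst}.txt"
        cmds.append(
            f"minikube -p {profile} cp {src}:/home/docker/cp-test.txt {dst}:{dst_path}"
        )
    return cmds

cmds = copy_matrix("ha-517435", nodes)
print(len(cmds))  # 4 nodes -> 12 ordered pairs
```

With three control-plane nodes plus one worker that is 12 cross-node copies, each followed by a `sudo cat` verification on both ends, which accounts for the long run of `cp`/`ssh` lines above.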

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-517435 node stop m02 -v=7 --alsologtostderr: (11.856445714s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr: exit status 7 (665.236729ms)

-- stdout --
	ha-517435
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-517435-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-517435-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-517435-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:43:16.152777  740386 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:43:16.152884  740386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:43:16.152891  740386 out.go:358] Setting ErrFile to fd 2...
	I0920 18:43:16.152896  740386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:43:16.153061  740386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:43:16.153245  740386 out.go:352] Setting JSON to false
	I0920 18:43:16.153280  740386 mustload.go:65] Loading cluster: ha-517435
	I0920 18:43:16.153399  740386 notify.go:220] Checking for updates...
	I0920 18:43:16.153707  740386 config.go:182] Loaded profile config "ha-517435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:43:16.153728  740386 status.go:174] checking status of ha-517435 ...
	I0920 18:43:16.154240  740386 cli_runner.go:164] Run: docker container inspect ha-517435 --format={{.State.Status}}
	I0920 18:43:16.173643  740386 status.go:364] ha-517435 host status = "Running" (err=<nil>)
	I0920 18:43:16.173676  740386 host.go:66] Checking if "ha-517435" exists ...
	I0920 18:43:16.173979  740386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-517435
	I0920 18:43:16.191202  740386 host.go:66] Checking if "ha-517435" exists ...
	I0920 18:43:16.191522  740386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:43:16.191626  740386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-517435
	I0920 18:43:16.208483  740386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/ha-517435/id_rsa Username:docker}
	I0920 18:43:16.319990  740386 ssh_runner.go:195] Run: systemctl --version
	I0920 18:43:16.324053  740386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:43:16.334843  740386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:43:16.381751  740386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-20 18:43:16.372401691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:43:16.382373  740386 kubeconfig.go:125] found "ha-517435" server: "https://192.168.49.254:8443"
	I0920 18:43:16.382403  740386 api_server.go:166] Checking apiserver status ...
	I0920 18:43:16.382443  740386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:16.392784  740386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	I0920 18:43:16.400973  740386 api_server.go:182] apiserver freezer: "4:freezer:/docker/ef36cdf6be592f69ecd8459d47789fcc38680e2551bb6be4ec5d69514a120ae6/crio/crio-5ddf96f33af167b7dc8d98d84161fdad7a4a9dcf3eacac3953ca7a9c14cf4445"
	I0920 18:43:16.401052  740386 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef36cdf6be592f69ecd8459d47789fcc38680e2551bb6be4ec5d69514a120ae6/crio/crio-5ddf96f33af167b7dc8d98d84161fdad7a4a9dcf3eacac3953ca7a9c14cf4445/freezer.state
	I0920 18:43:16.408771  740386 api_server.go:204] freezer state: "THAWED"
	I0920 18:43:16.408795  740386 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:43:16.412421  740386 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:43:16.412448  740386 status.go:456] ha-517435 apiserver status = Running (err=<nil>)
	I0920 18:43:16.412459  740386 status.go:176] ha-517435 status: &{Name:ha-517435 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:43:16.412473  740386 status.go:174] checking status of ha-517435-m02 ...
	I0920 18:43:16.412763  740386 cli_runner.go:164] Run: docker container inspect ha-517435-m02 --format={{.State.Status}}
	I0920 18:43:16.429373  740386 status.go:364] ha-517435-m02 host status = "Stopped" (err=<nil>)
	I0920 18:43:16.429393  740386 status.go:377] host is not running, skipping remaining checks
	I0920 18:43:16.429398  740386 status.go:176] ha-517435-m02 status: &{Name:ha-517435-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:43:16.429421  740386 status.go:174] checking status of ha-517435-m03 ...
	I0920 18:43:16.429745  740386 cli_runner.go:164] Run: docker container inspect ha-517435-m03 --format={{.State.Status}}
	I0920 18:43:16.445345  740386 status.go:364] ha-517435-m03 host status = "Running" (err=<nil>)
	I0920 18:43:16.445373  740386 host.go:66] Checking if "ha-517435-m03" exists ...
	I0920 18:43:16.445636  740386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-517435-m03
	I0920 18:43:16.461515  740386 host.go:66] Checking if "ha-517435-m03" exists ...
	I0920 18:43:16.461785  740386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:43:16.461819  740386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-517435-m03
	I0920 18:43:16.477486  740386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/ha-517435-m03/id_rsa Username:docker}
	I0920 18:43:16.572128  740386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:43:16.583315  740386 kubeconfig.go:125] found "ha-517435" server: "https://192.168.49.254:8443"
	I0920 18:43:16.583343  740386 api_server.go:166] Checking apiserver status ...
	I0920 18:43:16.583386  740386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:43:16.593510  740386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I0920 18:43:16.602248  740386 api_server.go:182] apiserver freezer: "4:freezer:/docker/e458c3c44820117b63389ca96a0b2137826e6ba5fba6f0f03c6507b160118de6/crio/crio-346414866dd9a1a909f672690a31e065fe996a825e2587b375faeefdd31ba167"
	I0920 18:43:16.602306  740386 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e458c3c44820117b63389ca96a0b2137826e6ba5fba6f0f03c6507b160118de6/crio/crio-346414866dd9a1a909f672690a31e065fe996a825e2587b375faeefdd31ba167/freezer.state
	I0920 18:43:16.610581  740386 api_server.go:204] freezer state: "THAWED"
	I0920 18:43:16.610606  740386 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:43:16.615722  740386 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:43:16.615746  740386 status.go:456] ha-517435-m03 apiserver status = Running (err=<nil>)
	I0920 18:43:16.615754  740386 status.go:176] ha-517435-m03 status: &{Name:ha-517435-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:43:16.615771  740386 status.go:174] checking status of ha-517435-m04 ...
	I0920 18:43:16.616055  740386 cli_runner.go:164] Run: docker container inspect ha-517435-m04 --format={{.State.Status}}
	I0920 18:43:16.633544  740386 status.go:364] ha-517435-m04 host status = "Running" (err=<nil>)
	I0920 18:43:16.633570  740386 host.go:66] Checking if "ha-517435-m04" exists ...
	I0920 18:43:16.633920  740386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-517435-m04
	I0920 18:43:16.651071  740386 host.go:66] Checking if "ha-517435-m04" exists ...
	I0920 18:43:16.651399  740386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:43:16.651452  740386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-517435-m04
	I0920 18:43:16.669536  740386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/ha-517435-m04/id_rsa Username:docker}
	I0920 18:43:16.761179  740386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:43:16.772698  740386 status.go:176] ha-517435-m04 status: &{Name:ha-517435-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 node start m02 -v=7 --alsologtostderr
E0920 18:43:41.556803  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.563221  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.574634  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.596068  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.637480  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.718932  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:41.880642  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:42.203019  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:42.845391  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:44.127583  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:43:46.689677  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-517435 node start m02 -v=7 --alsologtostderr: (31.568053339s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-517435 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-517435 -v=7 --alsologtostderr
E0920 18:43:51.811079  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:44:02.053362  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:44:22.535231  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-517435 -v=7 --alsologtostderr: (36.52484696s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-517435 --wait=true -v=7 --alsologtostderr
E0920 18:45:03.496716  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:46:25.418196  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-517435 --wait=true -v=7 --alsologtostderr: (2m11.569065169s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-517435
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.19s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 node delete m03 -v=7 --alsologtostderr
E0920 18:46:45.570418  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-517435 node delete m03 -v=7 --alsologtostderr: (10.5131084s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-517435 stop -v=7 --alsologtostderr: (35.44198687s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr: exit status 7 (99.172621ms)

                                                
                                                
-- stdout --
	ha-517435
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-517435-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-517435-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:47:26.444850  758070 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:47:26.445132  758070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:47:26.445144  758070 out.go:358] Setting ErrFile to fd 2...
	I0920 18:47:26.445150  758070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:47:26.445330  758070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:47:26.445537  758070 out.go:352] Setting JSON to false
	I0920 18:47:26.445579  758070 mustload.go:65] Loading cluster: ha-517435
	I0920 18:47:26.445696  758070 notify.go:220] Checking for updates...
	I0920 18:47:26.446102  758070 config.go:182] Loaded profile config "ha-517435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:47:26.446127  758070 status.go:174] checking status of ha-517435 ...
	I0920 18:47:26.446553  758070 cli_runner.go:164] Run: docker container inspect ha-517435 --format={{.State.Status}}
	I0920 18:47:26.464496  758070 status.go:364] ha-517435 host status = "Stopped" (err=<nil>)
	I0920 18:47:26.464553  758070 status.go:377] host is not running, skipping remaining checks
	I0920 18:47:26.464562  758070 status.go:176] ha-517435 status: &{Name:ha-517435 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:47:26.464618  758070 status.go:174] checking status of ha-517435-m02 ...
	I0920 18:47:26.464976  758070 cli_runner.go:164] Run: docker container inspect ha-517435-m02 --format={{.State.Status}}
	I0920 18:47:26.483931  758070 status.go:364] ha-517435-m02 host status = "Stopped" (err=<nil>)
	I0920 18:47:26.483950  758070 status.go:377] host is not running, skipping remaining checks
	I0920 18:47:26.483955  758070 status.go:176] ha-517435-m02 status: &{Name:ha-517435-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:47:26.483975  758070 status.go:174] checking status of ha-517435-m04 ...
	I0920 18:47:26.484218  758070 cli_runner.go:164] Run: docker container inspect ha-517435-m04 --format={{.State.Status}}
	I0920 18:47:26.500508  758070 status.go:364] ha-517435-m04 host status = "Stopped" (err=<nil>)
	I0920 18:47:26.500528  758070 status.go:377] host is not running, skipping remaining checks
	I0920 18:47:26.500534  758070 status.go:176] ha-517435-m04 status: &{Name:ha-517435-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (107.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-517435 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 18:48:41.557046  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:49:09.259726  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-517435 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.825947774s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.58s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (66.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-517435 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-517435 --control-plane -v=7 --alsologtostderr: (1m5.428165839s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-517435 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (67.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-356785 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-356785 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.69206002s)
--- PASS: TestJSONOutput/start/Command (67.69s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-356785 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-356785 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-356785 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-356785 --output=json --user=testUser: (5.733437388s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-038238 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-038238 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.416814ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d8c5b661-4a71-4699-ba36-b9b210656f45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-038238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8301a319-37d3-4b82-b2aa-2df191620a62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"d909fc7e-a725-49bf-bda3-e6e3819e27e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79ef9f6a-f57e-41d6-b88b-d23aa56111ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig"}}
	{"specversion":"1.0","id":"6795db62-5913-4fdd-bba8-46895fe965af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube"}}
	{"specversion":"1.0","id":"f9a7fb00-591c-4b1d-85d5-e1704bdbba65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"28fbf166-a5bf-427b-a871-a3d987a0f63e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2cf0fb40-509e-458b-9838-50c74b2117f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-038238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-038238
--- PASS: TestErrorJSONOutput (0.20s)
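Each line of the `--output=json` stream above is a CloudEvents envelope. As a minimal sketch (event fields copied from the final error event in the log above, abbreviated to the fields used), the exit code and error name can be pulled out like this:

```python
import json

# The io.k8s.sigs.minikube.error event from the log above, abbreviated.
line = (
    '{"specversion":"1.0","id":"2cf0fb40-509e-458b-9838-50c74b2117f5",'
    '"source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}'
)

event = json.loads(line)
# The "type" field distinguishes step, info, and error events.
assert event["type"] == "io.k8s.sigs.minikube.error"
# The payload carries the exit code as a string, matching "exit status 56" above.
print(event["data"]["exitcode"], event["data"]["name"])
```

Note the payload's `exitcode` is a JSON string, not a number, so a consumer comparing it against the process exit status has to convert one side.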

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.54s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-334036 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-334036 --network=: (33.527116744s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-334036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-334036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-334036: (1.993610175s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.54s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-513603 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-513603 --network=bridge: (24.266734025s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-513603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-513603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-513603: (1.811981996s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.10s)

                                                
                                    
TestKicExistingNetwork (24.15s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0920 18:52:49.943960  672823 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 18:52:49.959396  672823 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 18:52:49.959464  672823 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 18:52:49.959484  672823 cli_runner.go:164] Run: docker network inspect existing-network
W0920 18:52:49.974647  672823 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 18:52:49.974679  672823 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0920 18:52:49.974710  672823 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0920 18:52:49.974872  672823 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 18:52:49.991609  672823 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5829f6f2df27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:54:f4:25:9c} reservation:<nil>}
I0920 18:52:49.992244  672823 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017a3d60}
I0920 18:52:49.992275  672823 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 18:52:49.992324  672823 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 18:52:50.050800  672823 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-822209 --network=existing-network
E0920 18:53:08.638947  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-822209 --network=existing-network: (22.214612195s)
helpers_test.go:175: Cleaning up "existing-network-822209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-822209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-822209: (1.799391572s)
I0920 18:53:14.080616  672823 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.15s)
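The subnet selection visible in the log above (192.168.49.0/24 is taken by an existing bridge, so 192.168.58.0/24 is chosen) can be sketched as a scan over candidate private /24s. This is a simplified illustration, not minikube's actual implementation; the candidate list and the step size of 9 in the third octet are assumptions inferred from the two subnets in the log:

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, tries=20):
    """Return the first candidate /24 not present in `taken`, or None."""
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if net not in taken:
            return net
        # Advance the third octet by `step` (192.168.49.0 -> 192.168.58.0 -> ...).
        net = ipaddress.ip_network(
            (int(net.network_address) + step * 256, net.prefixlen)
        )
    return None

# In the log, br-5829f6f2df27 already occupies 192.168.49.0/24.
taken = {ipaddress.ip_network("192.168.49.0/24")}
print(pick_free_subnet(taken))  # 192.168.58.0/24, matching the log
```

The chosen subnet is then handed to `docker network create --driver=bridge --subnet=... --gateway=...`, as the `network_create.go` lines above show.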

                                                
                                    
TestKicCustomSubnet (26.59s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-393588 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-393588 --subnet=192.168.60.0/24: (24.529670215s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-393588 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-393588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-393588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-393588: (2.047854831s)
--- PASS: TestKicCustomSubnet (26.59s)

                                                
                                    
TestKicStaticIP (24.37s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-422404 --static-ip=192.168.200.200
E0920 18:53:41.556882  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-422404 --static-ip=192.168.200.200: (22.221971198s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-422404 ip
helpers_test.go:175: Cleaning up "static-ip-422404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-422404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-422404: (2.030461741s)
--- PASS: TestKicStaticIP (24.37s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (49.92s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-430991 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-430991 --driver=docker  --container-runtime=crio: (24.130368005s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-452518 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-452518 --driver=docker  --container-runtime=crio: (20.673771987s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-430991
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-452518
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-452518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-452518
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-452518: (1.825621056s)
helpers_test.go:175: Cleaning up "first-430991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-430991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-430991: (2.176900932s)
--- PASS: TestMinikubeProfile (49.92s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-275358 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-275358 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.265052226s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-275358 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-300644 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-300644 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.072655124s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-275358 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-275358 --alsologtostderr -v=5: (1.623820365s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-300644
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-300644: (1.166801609s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.01s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-300644
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-300644: (7.007856953s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-300644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-369280 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 18:56:45.568579  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-369280 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.125013847s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.57s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-369280 -- rollout status deployment/busybox: (4.578772065s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-k764v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-lcw47 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-k764v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-lcw47 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-k764v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-lcw47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.99s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-k764v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-k764v -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-lcw47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-369280 -- exec busybox-7dff88458-lcw47 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)

                                                
                                    
TestMultiNode/serial/AddNode (26.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-369280 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-369280 -v 3 --alsologtostderr: (25.642345422s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.23s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-369280 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp testdata/cp-test.txt multinode-369280:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2428228961/001/cp-test_multinode-369280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280:/home/docker/cp-test.txt multinode-369280-m02:/home/docker/cp-test_multinode-369280_multinode-369280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test_multinode-369280_multinode-369280-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280:/home/docker/cp-test.txt multinode-369280-m03:/home/docker/cp-test_multinode-369280_multinode-369280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test_multinode-369280_multinode-369280-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp testdata/cp-test.txt multinode-369280-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2428228961/001/cp-test_multinode-369280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m02:/home/docker/cp-test.txt multinode-369280:/home/docker/cp-test_multinode-369280-m02_multinode-369280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test_multinode-369280-m02_multinode-369280.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m02:/home/docker/cp-test.txt multinode-369280-m03:/home/docker/cp-test_multinode-369280-m02_multinode-369280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test_multinode-369280-m02_multinode-369280-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp testdata/cp-test.txt multinode-369280-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2428228961/001/cp-test_multinode-369280-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m03:/home/docker/cp-test.txt multinode-369280:/home/docker/cp-test_multinode-369280-m03_multinode-369280.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280 "sudo cat /home/docker/cp-test_multinode-369280-m03_multinode-369280.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 cp multinode-369280-m03:/home/docker/cp-test.txt multinode-369280-m02:/home/docker/cp-test_multinode-369280-m03_multinode-369280-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 ssh -n multinode-369280-m02 "sudo cat /home/docker/cp-test_multinode-369280-m03_multinode-369280-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.85s)
TestMultiNode/serial/StopNode (2.08s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-369280 node stop m03: (1.169852594s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-369280 status: exit status 7 (458.49231ms)
-- stdout --
	multinode-369280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-369280-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-369280-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr: exit status 7 (449.389224ms)
-- stdout --
	multinode-369280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-369280-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-369280-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 18:57:43.534686  824185 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:57:43.534804  824185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:43.534812  824185 out.go:358] Setting ErrFile to fd 2...
	I0920 18:57:43.534816  824185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:57:43.535028  824185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:57:43.535199  824185 out.go:352] Setting JSON to false
	I0920 18:57:43.535232  824185 mustload.go:65] Loading cluster: multinode-369280
	I0920 18:57:43.535349  824185 notify.go:220] Checking for updates...
	I0920 18:57:43.535643  824185 config.go:182] Loaded profile config "multinode-369280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:57:43.535664  824185 status.go:174] checking status of multinode-369280 ...
	I0920 18:57:43.536136  824185 cli_runner.go:164] Run: docker container inspect multinode-369280 --format={{.State.Status}}
	I0920 18:57:43.555691  824185 status.go:364] multinode-369280 host status = "Running" (err=<nil>)
	I0920 18:57:43.555740  824185 host.go:66] Checking if "multinode-369280" exists ...
	I0920 18:57:43.556078  824185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-369280
	I0920 18:57:43.573001  824185 host.go:66] Checking if "multinode-369280" exists ...
	I0920 18:57:43.573297  824185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:57:43.573348  824185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-369280
	I0920 18:57:43.589702  824185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/multinode-369280/id_rsa Username:docker}
	I0920 18:57:43.680065  824185 ssh_runner.go:195] Run: systemctl --version
	I0920 18:57:43.684073  824185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:57:43.693922  824185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:57:43.739214  824185 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-20 18:57:43.729848804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:57:43.739795  824185 kubeconfig.go:125] found "multinode-369280" server: "https://192.168.67.2:8443"
	I0920 18:57:43.739824  824185 api_server.go:166] Checking apiserver status ...
	I0920 18:57:43.739855  824185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:57:43.750074  824185 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1513/cgroup
	I0920 18:57:43.758450  824185 api_server.go:182] apiserver freezer: "4:freezer:/docker/36bb0e2419f9608262ddb577373fda19dbbb586b147b0a6a4cda27b68d0c797d/crio/crio-c6a6afd4f324d39c22611dc2b2cd70e50dd2db862cf652f377c1aac048d36b72"
	I0920 18:57:43.758541  824185 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/36bb0e2419f9608262ddb577373fda19dbbb586b147b0a6a4cda27b68d0c797d/crio/crio-c6a6afd4f324d39c22611dc2b2cd70e50dd2db862cf652f377c1aac048d36b72/freezer.state
	I0920 18:57:43.766079  824185 api_server.go:204] freezer state: "THAWED"
	I0920 18:57:43.766102  824185 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 18:57:43.770528  824185 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 18:57:43.770553  824185 status.go:456] multinode-369280 apiserver status = Running (err=<nil>)
	I0920 18:57:43.770566  824185 status.go:176] multinode-369280 status: &{Name:multinode-369280 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:57:43.770592  824185 status.go:174] checking status of multinode-369280-m02 ...
	I0920 18:57:43.770814  824185 cli_runner.go:164] Run: docker container inspect multinode-369280-m02 --format={{.State.Status}}
	I0920 18:57:43.786664  824185 status.go:364] multinode-369280-m02 host status = "Running" (err=<nil>)
	I0920 18:57:43.786687  824185 host.go:66] Checking if "multinode-369280-m02" exists ...
	I0920 18:57:43.786919  824185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-369280-m02
	I0920 18:57:43.803561  824185 host.go:66] Checking if "multinode-369280-m02" exists ...
	I0920 18:57:43.803822  824185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:57:43.803857  824185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-369280-m02
	I0920 18:57:43.820856  824185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19678-664237/.minikube/machines/multinode-369280-m02/id_rsa Username:docker}
	I0920 18:57:43.912136  824185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:57:43.922856  824185 status.go:176] multinode-369280-m02 status: &{Name:multinode-369280-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:57:43.922904  824185 status.go:174] checking status of multinode-369280-m03 ...
	I0920 18:57:43.923193  824185 cli_runner.go:164] Run: docker container inspect multinode-369280-m03 --format={{.State.Status}}
	I0920 18:57:43.939132  824185 status.go:364] multinode-369280-m03 host status = "Stopped" (err=<nil>)
	I0920 18:57:43.939153  824185 status.go:377] host is not running, skipping remaining checks
	I0920 18:57:43.939159  824185 status.go:176] multinode-369280-m03 status: &{Name:multinode-369280-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
TestMultiNode/serial/StartAfterStop (9.04s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-369280 node start m03 -v=7 --alsologtostderr: (8.38632134s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.04s)
TestMultiNode/serial/RestartKeepsNodes (79.72s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-369280
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-369280
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-369280: (24.64856615s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-369280 --wait=true -v=8 --alsologtostderr
E0920 18:58:41.556983  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-369280 --wait=true -v=8 --alsologtostderr: (54.979463503s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-369280
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.72s)
TestMultiNode/serial/DeleteNode (4.88s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-369280 node delete m03: (4.334132449s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.88s)
TestMultiNode/serial/StopMultiNode (23.68s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-369280 stop: (23.518125852s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-369280 status: exit status 7 (77.452651ms)
-- stdout --
	multinode-369280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-369280-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr: exit status 7 (79.543635ms)
-- stdout --
	multinode-369280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-369280-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 18:59:41.215868  833487 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:59:41.215979  833487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:59:41.215996  833487 out.go:358] Setting ErrFile to fd 2...
	I0920 18:59:41.216003  833487 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:59:41.216220  833487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 18:59:41.216433  833487 out.go:352] Setting JSON to false
	I0920 18:59:41.216467  833487 mustload.go:65] Loading cluster: multinode-369280
	I0920 18:59:41.216576  833487 notify.go:220] Checking for updates...
	I0920 18:59:41.216927  833487 config.go:182] Loaded profile config "multinode-369280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:59:41.216949  833487 status.go:174] checking status of multinode-369280 ...
	I0920 18:59:41.217414  833487 cli_runner.go:164] Run: docker container inspect multinode-369280 --format={{.State.Status}}
	I0920 18:59:41.235406  833487 status.go:364] multinode-369280 host status = "Stopped" (err=<nil>)
	I0920 18:59:41.235434  833487 status.go:377] host is not running, skipping remaining checks
	I0920 18:59:41.235444  833487 status.go:176] multinode-369280 status: &{Name:multinode-369280 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:59:41.235483  833487 status.go:174] checking status of multinode-369280-m02 ...
	I0920 18:59:41.235763  833487 cli_runner.go:164] Run: docker container inspect multinode-369280-m02 --format={{.State.Status}}
	I0920 18:59:41.253596  833487 status.go:364] multinode-369280-m02 host status = "Stopped" (err=<nil>)
	I0920 18:59:41.253620  833487 status.go:377] host is not running, skipping remaining checks
	I0920 18:59:41.253633  833487 status.go:176] multinode-369280-m02 status: &{Name:multinode-369280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.68s)
TestMultiNode/serial/RestartMultiNode (58.32s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-369280 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:00:04.621798  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-369280 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (57.765809275s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-369280 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.32s)
TestMultiNode/serial/ValidateNameConflict (26s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-369280
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-369280-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-369280-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.284717ms)
-- stdout --
	* [multinode-369280-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-369280-m02' is duplicated with machine name 'multinode-369280-m02' in profile 'multinode-369280'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-369280-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-369280-m03 --driver=docker  --container-runtime=crio: (23.829923169s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-369280
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-369280: exit status 80 (261.461976ms)
-- stdout --
	* Adding node m03 to cluster multinode-369280 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-369280-m03 already exists in multinode-369280-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-369280-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-369280-m03: (1.806301581s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.00s)
TestPreload (115.89s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-981382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0920 19:01:45.568164  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-981382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m18.106149076s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-981382 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-981382 image pull gcr.io/k8s-minikube/busybox: (3.516335178s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-981382
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-981382: (5.701867547s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-981382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-981382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.39738134s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-981382 image list
helpers_test.go:175: Cleaning up "test-preload-981382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-981382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-981382: (1.922377466s)
--- PASS: TestPreload (115.89s)
TestScheduledStopUnix (99.84s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-533789 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-533789 --memory=2048 --driver=docker  --container-runtime=crio: (23.705947493s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-533789 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-533789 -n scheduled-stop-533789
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-533789 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 19:03:29.405769  672823 retry.go:31] will retry after 90.301µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.406030  672823 retry.go:31] will retry after 131.588µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.407172  672823 retry.go:31] will retry after 325.16µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.408339  672823 retry.go:31] will retry after 385.933µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.409494  672823 retry.go:31] will retry after 371.056µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.410660  672823 retry.go:31] will retry after 1.017154ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.411824  672823 retry.go:31] will retry after 714.873µs: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.412959  672823 retry.go:31] will retry after 1.267442ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.415186  672823 retry.go:31] will retry after 2.802465ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.418497  672823 retry.go:31] will retry after 3.125496ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.422769  672823 retry.go:31] will retry after 6.363859ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.430001  672823 retry.go:31] will retry after 5.474663ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.436307  672823 retry.go:31] will retry after 12.521903ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.449570  672823 retry.go:31] will retry after 20.146504ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
I0920 19:03:29.470862  672823 retry.go:31] will retry after 33.137234ms: open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/scheduled-stop-533789/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-533789 --cancel-scheduled
E0920 19:03:41.557590  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-533789 -n scheduled-stop-533789
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-533789
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-533789 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-533789
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-533789: exit status 7 (62.698883ms)
-- stdout --
	scheduled-stop-533789
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-533789 -n scheduled-stop-533789
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-533789 -n scheduled-stop-533789: exit status 7 (62.193943ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-533789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-533789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-533789: (4.867625008s)
--- PASS: TestScheduledStopUnix (99.84s)
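The "will retry after" waits near the top of this test grow roughly geometrically. A hypothetical sketch of that backoff pattern (assumed base, factor, and jitter; not minikube's actual retry.go):

```python
import random

# Hypothetical exponential backoff with jitter, producing a delay
# sequence shaped like the "will retry after ...ms" log lines above.
def backoff_delays(base_ms=5.0, factor=1.6, jitter=0.5, n=5):
    delays = []
    delay = base_ms
    for _ in range(n):
        # jitter each wait by +/-50% so concurrent retries do not synchronize
        delays.append(delay * (1 + random.uniform(-jitter, jitter)))
        delay *= factor
    return delays

for d in backoff_delays():
    print(f"will retry after {d:.6f}ms")
```

With these parameters the waits trend upward while individual samples can dip, which matches the non-monotonic deltas in the log.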
TestInsufficientStorage (12.74s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-919104 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-919104 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.448611925s)

-- stdout --
	{"specversion":"1.0","id":"2d84fff2-3f51-48ce-8513-f2c9d29fa246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-919104] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6cd819db-2b21-42b5-b639-54a113adcc22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"2fd579ad-b088-4495-bcf7-26ddb57fead3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5379ece3-b317-423d-a06a-9f3117923392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig"}}
	{"specversion":"1.0","id":"e540cd5f-8def-42df-b421-08530d865361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube"}}
	{"specversion":"1.0","id":"0dab3042-7922-49d5-9f81-2a3456fe03a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bfc66ca5-b536-4363-82f1-9808a100c411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9b47f477-75fa-487b-894d-b4b7d93c8ee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5957cb69-d05a-44df-810a-13026ebccb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ebf785a0-dba4-4a23-912e-f9bdcd413f06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1afb49dc-4f9b-494f-b9cb-799627444335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"17d987ee-c976-4930-a42f-00f53159e0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-919104\" primary control-plane node in \"insufficient-storage-919104\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"53c58968-ea96-4feb-a430-8e6c8a965ea1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b48e657c-14fe-41dc-adb5-0fb41c76fa7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4dd8379a-3340-41f7-ba08-24070d26940b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-919104 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-919104 --output=json --layout=cluster: exit status 7 (258.128185ms)

-- stdout --
	{"Name":"insufficient-storage-919104","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-919104","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 19:04:55.846348  855885 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-919104" does not appear in /home/jenkins/minikube-integration/19678-664237/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-919104 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-919104 --output=json --layout=cluster: exit status 7 (252.520085ms)

-- stdout --
	{"Name":"insufficient-storage-919104","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-919104","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 19:04:56.099270  855983 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-919104" does not appear in /home/jenkins/minikube-integration/19678-664237/kubeconfig
	E0920 19:04:56.109027  855983 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/insufficient-storage-919104/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-919104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-919104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-919104: (1.776406275s)
--- PASS: TestInsufficientStorage (12.74s)
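The `--output=json` lines in this test are CloudEvents-style envelopes, one JSON object per line, with minikube's step data nested under `data`. A small sketch of consuming one (the sample event is copied verbatim from the log above):

```python
import json

# One CloudEvents-style line from the --output=json log above.
raw = (
    '{"specversion":"1.0","id":"ebf785a0-dba4-4a23-912e-f9bdcd413f06",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step",'
    '"datacontenttype":"application/json",'
    '"data":{"currentstep":"1","message":"Using the docker driver based on user configuration",'
    '"name":"Selecting Driver","totalsteps":"19"}}'
)

event = json.loads(raw)
data = event["data"]
# Render the step as a progress line, e.g. for a CI wrapper.
print(f'[{data["currentstep"]}/{data["totalsteps"]}] {data["message"]}')
# → [1/19] Using the docker driver based on user configuration
```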
TestRunningBinaryUpgrade (74.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2506069820 start -p running-upgrade-040927 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2506069820 start -p running-upgrade-040927 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.898773094s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-040927 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-040927 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.244026049s)
helpers_test.go:175: Cleaning up "running-upgrade-040927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-040927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-040927: (5.245338217s)
--- PASS: TestRunningBinaryUpgrade (74.84s)
TestKubernetesUpgrade (336.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.939374928s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-250211
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-250211: (1.191750171s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-250211 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-250211 status --format={{.Host}}: exit status 7 (74.761875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.010280632s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-250211 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (68.75629ms)

-- stdout --
	* [kubernetes-upgrade-250211] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-250211
	    minikube start -p kubernetes-upgrade-250211 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2502112 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-250211 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-250211 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.892985297s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-250211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-250211
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-250211: (2.568409838s)
--- PASS: TestKubernetesUpgrade (336.80s)
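The failed-downgrade step above exits with status 106 (`K8S_DOWNGRADE_UNSUPPORTED`). A hypothetical sketch of the underlying version comparison (not minikube's actual implementation; the exit code and message wording are taken from the log):

```python
# Hypothetical version-comparison check mirroring the
# K8S_DOWNGRADE_UNSUPPORTED refusal seen above.
def parse_version(v):
    # "v1.31.1" -> (1, 31, 1), so tuples compare numerically
    return tuple(int(part) for part in v.lstrip("v").split("."))

def start_exit_code(existing, requested):
    if parse_version(requested) < parse_version(existing):
        print(f"Unable to safely downgrade existing Kubernetes {existing} "
              f"cluster to {requested}")
        return 106  # K8S_DOWNGRADE_UNSUPPORTED
    return 0

print(start_exit_code("v1.31.1", "v1.20.0"))
```

Upgrades and same-version restarts return 0, which is why the subsequent restart at v1.31.1 succeeds.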
TestMissingContainerUpgrade (104.59s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1569980139 start -p missing-upgrade-426324 --memory=2200 --driver=docker  --container-runtime=crio
E0920 19:06:45.568276  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1569980139 start -p missing-upgrade-426324 --memory=2200 --driver=docker  --container-runtime=crio: (31.035372597s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-426324
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-426324: (16.215042736s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-426324
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-426324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-426324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.879931266s)
helpers_test.go:175: Cleaning up "missing-upgrade-426324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-426324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-426324: (1.897892292s)
--- PASS: TestMissingContainerUpgrade (104.59s)
TestStoppedBinaryUpgrade/Setup (2.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.67s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (76.927261ms)

-- stdout --
	* [NoKubernetes-865097] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
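The conflicting-flags failure above exits with status 14 (`MK_USAGE`). A hypothetical sketch of that mutual-exclusion check, using argparse as a stand-in (not minikube's CLI code; the exit code and message are from the log):

```python
import argparse

# Hypothetical validation: --kubernetes-version and --no-kubernetes
# must not be combined, mirroring the MK_USAGE failure above.
def validate(argv):
    parser = argparse.ArgumentParser(prog="start")
    parser.add_argument("--no-kubernetes", action="store_true")
    parser.add_argument("--kubernetes-version")
    args = parser.parse_args(argv)
    if args.no_kubernetes and args.kubernetes_version:
        print("cannot specify --kubernetes-version with --no-kubernetes")
        return 14  # MK_USAGE
    return 0

print(validate(["--no-kubernetes", "--kubernetes-version", "1.20"]))
```

Either flag alone validates cleanly; only the combination is rejected.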
TestNoKubernetes/serial/StartWithK8s (31.66s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865097 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865097 --driver=docker  --container-runtime=crio: (31.29201896s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-865097 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.66s)
TestStoppedBinaryUpgrade/Upgrade (109.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1726720514 start -p stopped-upgrade-884768 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1726720514 start -p stopped-upgrade-884768 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m19.838227299s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1726720514 -p stopped-upgrade-884768 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1726720514 -p stopped-upgrade-884768 stop: (2.3854622s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-884768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-884768 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.87782904s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.10s)
TestNetworkPlugins/group/false (6.84s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-852741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-852741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.165487ms)

-- stdout --
	* [false-852741] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 19:05:01.476021  858065 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:05:01.476134  858065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:05:01.476144  858065 out.go:358] Setting ErrFile to fd 2...
	I0920 19:05:01.476152  858065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:05:01.476452  858065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-664237/.minikube/bin
	I0920 19:05:01.477346  858065 out.go:352] Setting JSON to false
	I0920 19:05:01.478883  858065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10045,"bootTime":1726849056,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 19:05:01.479055  858065 start.go:139] virtualization: kvm guest
	I0920 19:05:01.481760  858065 out.go:177] * [false-852741] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 19:05:01.483553  858065 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:05:01.483557  858065 notify.go:220] Checking for updates...
	I0920 19:05:01.485196  858065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:05:01.486766  858065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-664237/kubeconfig
	I0920 19:05:01.488655  858065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-664237/.minikube
	I0920 19:05:01.490255  858065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 19:05:01.491944  858065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:05:01.494276  858065 config.go:182] Loaded profile config "NoKubernetes-865097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:01.494462  858065 config.go:182] Loaded profile config "offline-crio-822485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:05:01.494592  858065 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:05:01.526949  858065 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:05:01.527069  858065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:05:01.579639  858065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-20 19:05:01.568130604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 19:05:01.579815  858065 docker.go:318] overlay module found
	I0920 19:05:01.582781  858065 out.go:177] * Using the docker driver based on user configuration
	I0920 19:05:01.584290  858065 start.go:297] selected driver: docker
	I0920 19:05:01.584304  858065 start.go:901] validating driver "docker" against <nil>
	I0920 19:05:01.584316  858065 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:05:01.587052  858065 out.go:201] 
	W0920 19:05:01.588591  858065 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 19:05:01.589921  858065 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-852741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-852741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-852741

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-852741

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-852741

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-852741

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-852741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-852741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-852741"

                                                
                                                
----------------------- debugLogs end: false-852741 [took: 6.317988739s] --------------------------------
helpers_test.go:175: Cleaning up "false-852741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-852741
--- PASS: TestNetworkPlugins/group/false (6.84s)

TestNoKubernetes/serial/StartWithStopK8s (9.48s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --driver=docker  --container-runtime=crio: (7.213712073s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-865097 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-865097 status -o json: exit status 2 (290.313178ms)

-- stdout --
	{"Name":"NoKubernetes-865097","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-865097
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-865097: (1.970656537s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.48s)

TestNoKubernetes/serial/Start (5.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865097 --no-kubernetes --driver=docker  --container-runtime=crio: (5.677063041s)
--- PASS: TestNoKubernetes/serial/Start (5.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-865097 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-865097 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.099615ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

TestNoKubernetes/serial/Stop (2.84s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-865097
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-865097: (2.837681904s)
--- PASS: TestNoKubernetes/serial/Stop (2.84s)

TestNoKubernetes/serial/StartNoArgs (9.72s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865097 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865097 --driver=docker  --container-runtime=crio: (9.720575446s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-865097 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-865097 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.935245ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-884768
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestPause/serial/Start (78.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-640240 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-640240 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.367866596s)
--- PASS: TestPause/serial/Start (78.37s)

TestNetworkPlugins/group/auto/Start (71.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0920 19:08:41.556722  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.590280183s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.59s)

TestPause/serial/SecondStartNoReconfiguration (23.76s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-640240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-640240 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.742576846s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.76s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-640240 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-852741 "pgrep -a kubelet"
I0920 19:09:24.256274  672823 config.go:182] Loaded profile config "auto-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4lt8p" [7188b77c-ef9b-4a6f-96a8-52fb0fedac73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4lt8p" [7188b77c-ef9b-4a6f-96a8-52fb0fedac73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004790676s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-640240 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-640240 --output=json --layout=cluster: exit status 2 (316.86425ms)

-- stdout --
	{"Name":"pause-640240","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-640240","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-640240 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-640240 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (2.68s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-640240 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-640240 --alsologtostderr -v=5: (2.67782991s)
--- PASS: TestPause/serial/DeletePaused (2.68s)

TestNetworkPlugins/group/kindnet/Start (69.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.196006152s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.20s)

TestPause/serial/VerifyDeletedResources (0.69s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-640240
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-640240: exit status 1 (22.019321ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-640240: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.69s)

TestNetworkPlugins/group/calico/Start (58.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (58.479758275s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.48s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (51.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.795915834s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.80s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5hkcg" [ba9e6290-8622-47b8-b3f8-9242d280e401] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004441928s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-852741 "pgrep -a kubelet"
I0920 19:10:34.719502  672823 config.go:182] Loaded profile config "calico-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vmrrt" [fbe06e6b-96ac-4efb-9841-f728336dd58a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vmrrt" [fbe06e6b-96ac-4efb-9841-f728336dd58a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004230214s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8gdtl" [d39accba-09aa-4541-8c93-bce71d06bb8d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003461387s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-852741 "pgrep -a kubelet"
I0920 19:10:43.907847  672823 config.go:182] Loaded profile config "kindnet-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bznmd" [17ad4918-a7df-4c60-90a6-08bc4172a4ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bznmd" [17ad4918-a7df-4c60-90a6-08bc4172a4ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003921799s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-852741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-852741 exec deployment/netcat -- nslookup kubernetes.default
I0920 19:10:45.000755  672823 config.go:182] Loaded profile config "custom-flannel-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5h2tb" [322ddf73-8c0f-4eb3-88f1-9624bfcff0d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5h2tb" [322ddf73-8c0f-4eb3-88f1-9624bfcff0d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004301507s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (67.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.288095782s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.29s)

TestNetworkPlugins/group/flannel/Start (50.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.867496138s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.87s)

TestNetworkPlugins/group/bridge/Start (73.17s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0920 19:11:45.568462  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-852741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.171051292s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sw2wh" [a5c1b43b-d127-4161-9ca8-46a6331ac1d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00403049s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-852741 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-852741 "pgrep -a kubelet"
I0920 19:12:11.826923  672823 config.go:182] Loaded profile config "flannel-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-852741 replace --force -f testdata/netcat-deployment.yaml
I0920 19:12:11.972831  672823 config.go:182] Loaded profile config "enable-default-cni-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-77q8v" [c84ea145-e246-42ee-98c4-250213ea2bf2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-77q8v" [c84ea145-e246-42ee-98c4-250213ea2bf2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003840844s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-llpxr" [b963e68d-0a04-4692-ae99-88c2fe7a61ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-llpxr" [b963e68d-0a04-4692-ae99-88c2fe7a61ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004117401s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-852741 "pgrep -a kubelet"
I0920 19:12:29.013297  672823 config.go:182] Loaded profile config "bridge-852741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-852741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6jq94" [06527f3c-a808-45c5-84e7-9b398bae9ce7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6jq94" [06527f3c-a808-45c5-84e7-9b398bae9ce7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003999274s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (149.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-232943 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-232943 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.778146837s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.78s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-852741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-852741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (63.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-841524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-841524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m3.670241637s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.67s)

TestStartStop/group/embed-certs/serial/FirstStart (76.75s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-608709 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-608709 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m16.753379741s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.75s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-906835 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:13:41.556244  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-906835 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (45.293641043s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.29s)

TestStartStop/group/no-preload/serial/DeployApp (12.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-841524 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ef2445c-fe99-442a-9d97-0711bd174184] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ef2445c-fe99-442a-9d97-0711bd174184] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003731927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-841524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.24s)
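The DeployApp steps above follow a create-then-poll pattern: apply the busybox manifest, wait until the pod matching `integration-test=busybox` reports Running, then exec a command in it. A minimal sketch of that wait loop, with the kubectl call stubbed out so it is self-contained (the stub and its poll counts are hypothetical, not taken from the test harness):

```shell
#!/bin/sh
# Stand-in for: kubectl get pod busybox -o jsonpath='{.status.phase}'
# Reports Pending for the first two polls, then Running (hypothetical timing).
attempt=0
get_phase() {
  if [ "$attempt" -lt 2 ]; then echo "Pending"; else echo "Running"; fi
}

# Poll until the pod phase is Running, as the harness does with an 8m timeout.
while :; do
  phase=$(get_phase)
  [ "$phase" = "Running" ] && break
  attempt=$((attempt + 1))
done
echo "healthy after $attempt polls"
```

The real harness adds a deadline and backoff; this only shows the Pending-then-Running transition visible in the log lines above.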

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-906835 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c9e4861-49ef-41d2-aa2e-66830023808c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c9e4861-49ef-41d2-aa2e-66830023808c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003996841s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-906835 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-906835 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-906835 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-906835 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-906835 --alsologtostderr -v=3: (11.934883579s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-841524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-841524 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)
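The EnableAddonWhileActive tests exercise `addons enable` with per-addon image and registry overrides; both `--images` and `--registries` take `Name=value` pairs. A small sketch of splitting that form on the first `=` (the variable names are illustrative; the pair is the one passed in the command above):

```shell
#!/bin/sh
# One --images override as passed in the test above.
spec="MetricsServer=registry.k8s.io/echoserver:1.4"

# Split on the first '=': the part before is the addon image name,
# everything after (which may itself contain ':') is the replacement image.
name="${spec%%=*}"
image="${spec#*=}"
echo "$name -> $image"
```

Splitting on the first `=` only matters because the value side carries a tag separator (`:1.4`) and may carry further `=`-free path segments.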

TestStartStop/group/no-preload/serial/Stop (11.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-841524 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-841524 --alsologtostderr -v=3: (11.848128494s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.85s)

TestStartStop/group/embed-certs/serial/DeployApp (11.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-608709 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f1a10818-7986-4e39-9a56-d4a293a4e51b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f1a10818-7986-4e39-9a56-d4a293a4e51b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.005121669s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-608709 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835: exit status 7 (66.775288ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-906835 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
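EnableAddonAfterStop relies on `minikube status` signalling a stopped host via exit code 7, which the test treats as acceptable ("may be ok" in the log) before re-enabling an addon. A sketch of that branch, with the minikube invocation replaced by a stub so the sketch is self-contained (the stub is hypothetical; exit status 7 with "Stopped" on stdout matches the output above):

```shell
#!/bin/sh
# Stand-in for: out/minikube-linux-amd64 status --format={{.Host}} -p <profile>
status_check() {
  echo "Stopped"
  return 7
}

if host=$(status_check); then
  result="host is $host"
else
  rc=$?
  # Exit status 7 means the profile exists but the host is stopped;
  # the test logs this and proceeds to `addons enable`.
  result="status error: exit status $rc (may be ok)"
fi
echo "$result"
```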

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-906835 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-906835 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.432770823s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.74s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-841524 -n no-preload-841524
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-841524 -n no-preload-841524: exit status 7 (63.339059ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-841524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (263.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-841524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-841524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.733234013s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-841524 -n no-preload-841524
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.04s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-608709 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-608709 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-608709 --alsologtostderr -v=3
E0920 19:14:24.476592  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.483036  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.494449  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.515820  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.557220  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.638698  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:24.800813  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-608709 --alsologtostderr -v=3: (12.056041009s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-608709 -n embed-certs-608709
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-608709 -n embed-certs-608709: exit status 7 (79.873024ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-608709 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (264.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-608709 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:14:25.122610  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:25.764207  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:27.045810  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:29.607777  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:34.729335  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:14:44.971174  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-608709 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m24.001460351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-608709 -n embed-certs-608709
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (264.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-232943 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b54deec-393d-4c29-bc1d-826f3d3170dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b54deec-393d-4c29-bc1d-826f3d3170dd] Running
E0920 19:15:05.453480  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003182886s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-232943 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-232943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-232943 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-232943 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-232943 --alsologtostderr -v=3: (11.95326672s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-232943 -n old-k8s-version-232943
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-232943 -n old-k8s-version-232943: exit status 7 (66.397308ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-232943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (139.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-232943 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 19:15:28.461602  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.467959  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.479774  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.501271  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.542899  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.624603  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:28.786053  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:29.108188  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:29.749821  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:31.031176  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:33.592724  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.654522  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.660917  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.672312  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.693720  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.735207  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.817573  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.979329  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:38.301336  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:38.714758  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:38.943402  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:40.225404  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:42.787002  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.220585  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.226994  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.238367  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.259770  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.301212  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.383289  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.544873  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:45.866929  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:46.415152  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:46.508880  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:47.791203  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:47.908730  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:48.956256  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:50.353054  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:55.475133  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:58.150323  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:05.717460  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:09.437646  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:18.632070  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:26.198833  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:44.624044  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:45.567804  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/addons-162403/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:50.399800  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:59.594230  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.551590  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.557995  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.569432  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.590911  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.632353  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.713837  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:05.875504  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:06.197319  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:06.839525  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:07.160156  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:08.121329  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:08.337003  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/auto-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:10.682801  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.143107  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.149580  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.161020  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.182425  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.223919  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.306191  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.467780  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:12.789490  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:13.431607  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:14.713621  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:15.804498  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:17.275123  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:22.397016  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:26.046808  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.236701  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.243026  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.254421  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.275848  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.317289  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.398786  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.560207  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:29.882136  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:30.523766  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:31.805708  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:32.638444  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:34.367127  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:39.489189  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-232943 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.325038682s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-232943 -n old-k8s-version-232943
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (139.63s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2lgp" [151b7eac-8884-4a99-8950-4329257b4668] Running
E0920 19:17:46.528695  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004107019s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2lgp" [151b7eac-8884-4a99-8950-4329257b4668] Running
E0920 19:17:49.730824  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:17:53.120115  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004235772s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-232943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-232943 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-232943 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-232943 -n old-k8s-version-232943
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-232943 -n old-k8s-version-232943: exit status 2 (282.53239ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-232943 -n old-k8s-version-232943
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-232943 -n old-k8s-version-232943: exit status 2 (286.391471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-232943 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-232943 -n old-k8s-version-232943
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-232943 -n old-k8s-version-232943
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.51s)

TestStartStop/group/newest-cni/serial/FirstStart (27.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-760477 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:18:10.212838  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/bridge-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:12.322046  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/calico-852741/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:21.516100  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/kindnet-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-760477 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (27.607459808s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.61s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-760477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 19:18:27.490667  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-760477 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-760477 --alsologtostderr -v=3: (1.199764194s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-760477 -n newest-cni-760477
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-760477 -n newest-cni-760477: exit status 7 (63.205366ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-760477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 19:18:29.082495  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/custom-flannel-852741/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (13.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-760477 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-760477 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (13.318438042s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-760477 -n newest-cni-760477
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2lmtr" [ca6e09bd-d8a3-4a46-8cdc-fc5b2326a6f4] Running
E0920 19:18:34.082281  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/enable-default-cni-852741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003834194s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-68fcj" [0edbd4d5-947d-494d-b85e-95450af25227] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003678736s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2lmtr" [ca6e09bd-d8a3-4a46-8cdc-fc5b2326a6f4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004959714s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-906835 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-68fcj" [0edbd4d5-947d-494d-b85e-95450af25227] Running
E0920 19:18:41.556926  672823 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-664237/.minikube/profiles/functional-145666/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004580402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-841524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-760477 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-760477 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-760477 -n newest-cni-760477
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-760477 -n newest-cni-760477: exit status 2 (281.044989ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-760477 -n newest-cni-760477
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-760477 -n newest-cni-760477: exit status 2 (301.824802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-760477 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-760477 -n newest-cni-760477
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-760477 -n newest-cni-760477
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-906835 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
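The VerifyKubernetesImages subtests list images with `minikube image list --format=json` and log anything outside the expected minikube registries as "Found non-minikube image". A minimal sketch of that filtering idea, with a hard-coded image list standing in for the real command output; the `registry.k8s.io` allow-list here is an assumption for illustration, not minikube's actual rule:

```shell
# Stand-in for the image names parsed out of `minikube image list --format=json`.
images='registry.k8s.io/kube-apiserver:v1.31.1
kindest/kindnetd:v20240813-c6f155d6
gcr.io/k8s-minikube/busybox:1.28.4-glibc'

# Report anything not from the assumed allow-listed registry.
printf '%s\n' "$images" | grep -v '^registry.k8s.io/'
```

Run against this sample list, the filter surfaces the same two images the test reports (kindnetd and busybox).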

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-906835 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835: exit status 2 (348.593868ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835: exit status 2 (353.271618ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-906835 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906835 -n default-k8s-diff-port-906835
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-841524 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.02s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-841524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-841524 --alsologtostderr -v=1: (1.033771521s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-841524 -n no-preload-841524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-841524 -n no-preload-841524: exit status 2 (315.828222ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-841524 -n no-preload-841524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-841524 -n no-preload-841524: exit status 2 (308.692745ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-841524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-841524 -n no-preload-841524
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-841524 -n no-preload-841524
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.02s)
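The repeated `status error: exit status 2 (may be ok)` lines reflect that `minikube status` exits with code 2 while components are paused, and the harness tolerates that. A sketch of the tolerance logic under the assumption that only exit codes 0 and 2 are acceptable; `fake_status` is a hypothetical stand-in for the real `minikube status --format={{.APIServer}} -p <profile>` call:

```shell
# Run a status command, tolerating exit status 2 (paused components) the way
# the test harness does; any other nonzero status is treated as a real error.
check_status() {
  rc=0
  out=$("$@") || rc=$?
  case "$rc" in
    0) echo "status: $out" ;;
    2) echo "status: $out; status error: exit status $rc (may be ok)" ;;
    *) echo "status error: exit status $rc" ; return 1 ;;
  esac
}

# Hypothetical stand-in mimicking a paused apiserver: prints "Paused", exits 2.
fake_status() { echo "Paused"; return 2; }

check_status fake_status
```

The `|| rc=$?` pattern captures the command's exit code without tripping `set -e`, which is why the paused case can be inspected instead of aborting the script.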

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7f7r9" [fcbfd422-dd72-489b-9192-0528cdd6f8af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003952099s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7f7r9" [fcbfd422-dd72-489b-9192-0528cdd6f8af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003176686s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-608709 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-608709 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-608709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-608709 -n embed-certs-608709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-608709 -n embed-certs-608709: exit status 2 (288.56643ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-608709 -n embed-certs-608709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-608709 -n embed-certs-608709: exit status 2 (279.95513ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-608709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-608709 -n embed-certs-608709
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-608709 -n embed-certs-608709
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.54s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-852741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-852741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-852741

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: /etc/hosts:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: /etc/resolv.conf:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-852741

>>> host: crictl pods:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: crictl containers:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> k8s: describe netcat deployment:
error: context "kubenet-852741" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-852741" does not exist

>>> k8s: netcat logs:
error: context "kubenet-852741" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-852741" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-852741" does not exist

>>> k8s: coredns logs:
error: context "kubenet-852741" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-852741" does not exist

>>> k8s: api server logs:
error: context "kubenet-852741" does not exist

>>> host: /etc/cni:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: ip a s:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: ip r s:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: iptables-save:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: iptables table nat:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-852741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-852741" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-852741" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: kubelet daemon config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> k8s: kubelet logs:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-852741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-852741"

                                                
                                                
----------------------- debugLogs end: kubenet-852741 [took: 3.375813478s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-852741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-852741
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

TestNetworkPlugins/group/cilium (3.52s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-852741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-852741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-852741

>>> host: /etc/nsswitch.conf:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/hosts:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/resolv.conf:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-852741

>>> host: crictl pods:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: crictl containers:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> k8s: describe netcat deployment:
error: context "cilium-852741" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-852741" does not exist

>>> k8s: netcat logs:
error: context "cilium-852741" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-852741" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-852741" does not exist

>>> k8s: coredns logs:
error: context "cilium-852741" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-852741" does not exist

>>> k8s: api server logs:
error: context "cilium-852741" does not exist

>>> host: /etc/cni:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: ip a s:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: ip r s:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: iptables-save:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: iptables table nat:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-852741

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-852741

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-852741" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-852741" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-852741

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-852741

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-852741" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-852741" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-852741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-852741" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-852741" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: kubelet daemon config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> k8s: kubelet logs:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-852741

>>> host: docker daemon status:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: docker daemon config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: docker system info:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: cri-docker daemon status:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: cri-docker daemon config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: cri-dockerd version:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: containerd daemon status:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: containerd daemon config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: containerd config dump:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: crio daemon status:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: crio daemon config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: /etc/crio:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

>>> host: crio config:
* Profile "cilium-852741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852741"

----------------------- debugLogs end: cilium-852741 [took: 3.360101228s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-852741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-852741
--- SKIP: TestNetworkPlugins/group/cilium (3.52s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-761044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-761044
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)